AELA-DLSTMs: Attention-enabled and location-aware double LSTMs for aspect-level sentiment classification

Shuang, Kai, Ren, Xintao, Yang, Qianqian, Li, Rui and Loo, Jonathan ORCID: https://orcid.org/0000-0002-2197-8126 (2018) AELA-DLSTMs: Attention-enabled and location-aware double LSTMs for aspect-level sentiment classification. Neurocomputing, 334. pp. 25-34. ISSN 0925-2312

PDF: AELA-jonathan.pdf (Accepted Version), available under a Creative Commons Attribution Non-commercial No Derivatives license.

Abstract

Aspect-level sentiment classification, a fine-grained task in sentiment classification that aims to extract the sentiment polarity of opinions towards a specific aspect word, has seen tremendous improvements in recent years. Three key factors drive aspect-level sentiment classification: the contextual semantic information around aspect words, the correlations between aspect words and their context words, and the location information of context words with regard to aspect words. In this paper, two models named AE-DLSTMs (Attention-Enabled Double LSTMs) and AELA-DLSTMs (Attention-Enabled and Location-Aware Double LSTMs) are proposed for aspect-level sentiment classification. AE-DLSTMs takes full advantage of DLSTMs (Double LSTMs), which capture the contextual semantic information around aspect words in both the forward and backward directions. Meanwhile, a novel attention-weight generation method that combines aspect words with their contextual semantic information is designed so that the weights can make better use of the correlations between aspect words and their context words. In addition, we observe that context words at different distances from, or in different directions relative to, aspect words contribute differently to sentiment polarity. Building on AE-DLSTMs, AELA-DLSTMs incorporates the location information of context words by assigning them different weights, which further improves accuracy. Experiments are conducted on two English datasets and one Chinese dataset. The experimental results confirm that our models achieve remarkable improvements and outperform all baseline models on all datasets, improving accuracy by 1.67 to 4.77 percent depending on the dataset.
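To make the mechanism described in the abstract concrete, below is a minimal NumPy sketch of a location-aware attention step over bidirectional LSTM hidden states. It is one plausible reading of the abstract, not the authors' exact formulation: the linear distance decay, the dot-product scoring, and all names (location_aware_attention, loc_weights, aspect_pos) are assumptions made for illustration.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def location_aware_attention(H, aspect, aspect_pos):
    """Hypothetical sketch of AELA-style attention.

    H          : (n, d) bidirectional LSTM hidden states for the context words.
    aspect     : (d,)   aspect-word representation (e.g. averaged embeddings).
    aspect_pos : int    index of the aspect word in the sentence.
    Returns a (d,) aspect-specific sentence representation for the classifier.
    """
    n, d = H.shape
    # Location weights: context words closer to the aspect word count more.
    # A linear decay is one plausible choice; the paper's scheme may differ.
    distances = np.abs(np.arange(n) - aspect_pos)
    loc_weights = 1.0 - distances / n                 # (n,)
    H_loc = H * loc_weights[:, None]                  # position-weighted states

    # Attention scores combining each context state with the aspect word,
    # here a plain dot product; the paper learns this interaction instead.
    scores = H_loc @ aspect                           # (n,)
    alpha = softmax(scores)                           # attention distribution

    # Weighted sum of hidden states gives the final representation.
    return alpha @ H_loc

# Usage with toy values: 8 context words, hidden size 4.
H = np.random.randn(8, 4)
a = np.random.randn(4)
rep = location_aware_attention(H, a, aspect_pos=3)
```

The linear decay mirrors the abstract's observation that nearer context words contribute more to sentiment polarity; in the paper itself the attention weights are presumably learned jointly with the DLSTMs rather than fixed as here.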

Item Type: Article
Identifier: 10.1016/j.neucom.2018.11.084
Keywords: Neural Network, Long Short-Term Memory, Attention Mechanism, Aspect-Level Sentiment Classification
Subjects: Computing > Intelligent systems
Date Deposited: 11 Jan 2019 15:40
Last Modified: 04 Nov 2024 12:00
URI: https://repository.uwl.ac.uk/id/eprint/5740
