Bidirectional-GRU Based on Attention Mechanism for Aspect-level Sentiment Analysis

Cited by: 6
Authors
Zhai Penghua [1 ,2 ,3 ]
Zhang Dingyi [1 ,2 ]
Affiliations
[1] Chinese Acad Sci, Shenyang Inst Automat, Shenyang 110016, Liaoning, Peoples R China
[2] Chinese Acad Sci, Inst Robot & Intelligent Mfg, Shenyang 110016, Liaoning, Peoples R China
[3] Univ Chinese Acad Sci, Beijing 100049, Peoples R China
Source
ICMLC 2019: 2019 11TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND COMPUTING | 2019
Keywords
sentiment analysis; deep learning; machine learning; attention mechanism; aspect-level
DOI
10.1145/3318299.3318368
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Aspect-level sentiment analysis is a fine-grained natural language processing task. Traditional deep learning models cannot accurately construct aspect-level sentiment features. For example, in the sentence "the movie is very funny, but the seats in the theater are uncomfortable," the polarity is positive for the movie but negative for the seats. To address this problem, we propose a bidirectional gated recurrent unit (GRU) neural network that integrates an attention mechanism for aspect-level sentiment analysis. The attention mechanism can focus on different parts of a sentence when the sentence contains several aspects. Because the bidirectional GRU reads the sentence from both the front and the back, it captures independent context semantics and deeper aspect-related sentiment information, so the model can determine the sentiment polarity of a specific aspect. Finally, we experiment on the SemEval-2014 and Twitter datasets, and the results verify the effectiveness of the attention-based bidirectional GRU for aspect-level sentiment analysis. The model achieves good performance on different datasets and improves on previous models.
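The abstract describes an architecture in which aspect-aware attention weights are placed over bidirectional GRU hidden states before classification. Below is a minimal sketch in PyTorch, assuming padded token IDs for the sentence and the aspect term and a three-class polarity output; the layer sizes, the averaged aspect embedding, and all names are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AttBiGRU(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=128, num_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bigru = nn.GRU(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        # Attention score for each token is computed from its bidirectional hidden
        # state concatenated with the (averaged) aspect embedding, so the weights
        # depend on which aspect is being queried.
        self.attn = nn.Linear(2 * hidden_dim + embed_dim, 1)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, sentence_ids, aspect_ids):
        # sentence_ids: (batch, seq_len); aspect_ids: (batch, aspect_len)
        h, _ = self.bigru(self.embed(sentence_ids))            # (batch, seq_len, 2*hidden)
        aspect = self.embed(aspect_ids).mean(dim=1)            # (batch, embed_dim)
        aspect_rep = aspect.unsqueeze(1).expand(-1, h.size(1), -1)
        scores = self.attn(torch.cat([h, aspect_rep], dim=-1)).squeeze(-1)
        alpha = F.softmax(scores, dim=-1)                      # per-token attention weights
        context = torch.bmm(alpha.unsqueeze(1), h).squeeze(1)  # attention-weighted sentence vector
        return self.classifier(context)                        # logits over {negative, neutral, positive}

# Toy usage: batch of 2 sentences with 12 tokens each and 2-token aspect terms.
model = AttBiGRU(vocab_size=1000)
logits = model(torch.randint(1, 1000, (2, 12)), torch.randint(1, 1000, (2, 2)))
print(logits.shape)  # torch.Size([2, 3])

For the example sentence in the abstract, such a model would be run once per aspect ("movie", "seats"), and the attention weights would be expected to concentrate on the clause that mentions the queried aspect.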
Pages: 86-90
Number of pages: 5