Incorporating Word Significance into Aspect-Level Sentiment Analysis

Times Cited: 2
Authors
Mokhosi, Refuoe [1 ]
Qin, ZhiGuang [1 ]
Liu, Qiao [1 ]
Shikali, Casper [2 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Informat & Software Engn, Xiyuan Ave, Chengdu 611731, Sichuan, Peoples R China
[2] South Eastern Kenya Univ, Dept Informat Technol, Kitui 90200, Kenya
Source
APPLIED SCIENCES-BASEL | 2019 / Vol. 9 / Issue 17
Funding
National Natural Science Foundation of China;
Keywords
aspect-level sentiment analysis; attention mechanism; novelty decay; incremental interpretation; stretched exponential law; INCREMENTAL INTERPRETATION; ATTENTION; DISTRIBUTIONS; MEMORY;
DOI
10.3390/app9173522
CLC Number
O6 [Chemistry];
Discipline Code
0703 ;
Abstract
Aspect-level sentiment analysis has drawn growing attention in recent years, with higher performance achieved through the attention mechanism. Despite this, previous research does not consider some human psychological evidence relating to language interpretation. As a result, attention is paid to less significant words, especially when the aspect word is far from the relevant context word or when an important context word appears at the end of a long sentence. We design a novel model that uses word significance to direct attention towards the most significant words, with novelty decay and incremental interpretation factors working together as an alternative to position-based models. The incremental interpretation factor maximizes the degree to which each newly encountered word contributes to the sentiment polarity, while a counterbalancing stretched exponential novelty decay factor models the decaying human reaction as a sentence grows longer. Our findings support the hypothesis that the attention mechanism needs to be applied to the most significant words for sentiment interpretation, and that novelty decay is applicable to aspect-level sentiment analysis with a decay factor beta = 0.7.
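The stretched exponential novelty decay described in the abstract can be illustrated with a small sketch. This is not the authors' published implementation; the function names, the distance-based input, and the way decay is combined with raw attention scores are illustrative assumptions. It shows only the general idea: each context word's attention score is damped by a stretched exponential weight exp(-(d/tau)^beta) of its distance d from the aspect word, with the paper's reported beta = 0.7.

```python
import numpy as np

def novelty_decay(distances, beta=0.7, tau=1.0):
    """Stretched exponential decay weight exp(-(d/tau)**beta).

    beta = 1 recovers ordinary exponential decay; beta < 1
    (e.g. the paper's 0.7) decays fast at first, then flattens.
    """
    d = np.asarray(distances, dtype=float)
    return np.exp(-((d / tau) ** beta))

def significance_weighted_attention(scores, distances, beta=0.7):
    """Re-weight raw attention scores by novelty decay, then normalize.

    `scores` are hypothetical unnormalized attention logits for each
    context word; `distances` are word distances to the aspect term.
    """
    decay = novelty_decay(distances, beta=beta)
    weighted = np.asarray(scores, dtype=float) * decay
    e = np.exp(weighted - weighted.max())  # softmax, numerically stable
    return e / e.sum()
```

Under this sketch, a strong raw score far from the aspect word is damped but not zeroed, which matches the abstract's aim of balancing incremental interpretation against decaying novelty rather than hard position cut-offs.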
Pages: 15