Encoding Syntactic Information into Transformers for Aspect-Based Sentiment Triplet Extraction

Times Cited: 23
Authors
Yuan, Li [1 ]
Wang, Jin [1 ]
Yu, Liang-Chih [2 ]
Zhang, Xuejie [1 ]
Affiliations
[1] Yunnan Univ, Sch Informat Sci & Engn, Kunming 650000, Peoples R China
[2] Yuan Ze Univ, Dept Informat Management, Taoyuan 32003, Taiwan
Funding
National Natural Science Foundation of China
Keywords
Task analysis; Syntactics; Data mining; Tagging; Pipelines; Transformers; Sentiment analysis; Aspect sentiment triplet extraction; sentiment analysis; syntactic information; transformers;
DOI
10.1109/TAFFC.2023.3291730
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Aspect-based sentiment triplet extraction (ASTE), a relatively new and challenging subtask of aspect-based sentiment analysis (ABSA), aims to extract triplets consisting of aspect terms, their associated opinion terms, and the corresponding sentiment polarities from sentences. Previous studies have used either pipeline models or unified tagging-schema models. These models ignore the syntactic relationships between an aspect and its corresponding opinion words, which leads them to mistakenly focus on syntactically unrelated words. One feasible option is to use a graph convolutional network (GCN) to exploit syntactic information by propagating representations from the opinion words to the aspect. However, such a method treats all syntactic dependencies as the same type and may therefore still incorrectly associate unrelated words with the target aspect through iterations of graph convolutional propagation. Herein, a syntax-aware transformer (SA-Transformer) is proposed to extend the GCN strategy by fully exploiting the dependency types of edges to block inappropriate propagation. The proposed approach can obtain different representations and weights even for edges of the same dependency type, according to the dependency types of their adjacent edges. Instead of a GCN layer, an L-layer SA-Transformer is used to encode syntactic information into the word-pair representation to improve performance. Experimental results on four benchmark datasets show that the proposed model outperforms various previous models for ASTE.
Pages: 722-735
Page count: 14
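
The abstract above describes attention-based propagation along typed dependency edges, in contrast to a GCN that treats every edge identically. The following is a minimal sketch of that general idea, not the authors' SA-Transformer: each attention logit between words i and j is biased by a learned embedding of the dependency relation on edge (i, j), so different dependency types yield different propagation weights. The class name, `num_dep_types`, and the scalar-bias formulation are illustrative assumptions; the paper's conditioning on adjacent edges' dependency types is not reproduced here.

```python
# Minimal sketch of dependency-type-aware self-attention (assumed design,
# not the paper's exact SA-Transformer layer).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SyntaxAwareAttention(nn.Module):
    def __init__(self, hidden_dim: int, num_dep_types: int):
        super().__init__()
        self.q_proj = nn.Linear(hidden_dim, hidden_dim)
        self.k_proj = nn.Linear(hidden_dim, hidden_dim)
        self.v_proj = nn.Linear(hidden_dim, hidden_dim)
        # One learned scalar bias per dependency type (index 0 = "no edge"),
        # so edges of different types contribute different attention weights.
        self.dep_bias = nn.Embedding(num_dep_types + 1, 1)
        self.scale = hidden_dim ** -0.5

    def forward(self, x: torch.Tensor, dep_type_ids: torch.Tensor) -> torch.Tensor:
        # x:            (batch, seq_len, hidden_dim) word representations
        # dep_type_ids: (batch, seq_len, seq_len) dependency-type id of edge (i, j)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        logits = torch.matmul(q, k.transpose(-1, -2)) * self.scale
        logits = logits + self.dep_bias(dep_type_ids).squeeze(-1)
        attn = F.softmax(logits, dim=-1)
        return torch.matmul(attn, v)


# Toy usage: stacking L such layers (instead of GCN layers) propagates
# opinion-word information toward aspect words along typed dependency edges.
layer = SyntaxAwareAttention(hidden_dim=64, num_dep_types=40)
x = torch.randn(2, 10, 64)                # batch of word vectors
dep = torch.randint(0, 41, (2, 10, 10))   # toy dependency-type ids
out = layer(x, dep)                       # -> (2, 10, 64)
```

In a plain GCN, the adjacency matrix would play the role of `dep_type_ids` but carry only 0/1 values, which is why all dependency types are weighted identically; the per-type bias above is the simplest way to let the model down-weight edges that should not propagate opinion information to the aspect.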