PRIOR-BERT AND MULTI-TASK LEARNING FOR TARGET-ASPECT-SENTIMENT JOINT DETECTION

Cited by: 7
Authors
Ke, Cai [1 ,2 ]
Xiong, Qingyu [1 ,2 ]
Wu, Chao [1 ,2 ]
Liao, Zikai [3 ]
Yi, Hualing [1 ]
Affiliations
[1] Chongqing Univ, Sch Big Data & Software Engn, Chongqing, Peoples R China
[2] Chongqing Univ, Key Lab Dependable Serv Comp Cyber Phys Soc, Minist Educ, Chongqing, Peoples R China
[3] Xidian Univ, Xian, Peoples R China
Source
2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP) | 2022
Keywords
Aspect-Based Sentiment Analysis; Target-Aspect-Sentiment; Multi-Task Learning; Joint Detection
DOI
10.1109/ICASSP43922.2022.9747904
Chinese Library Classification
O42 [Acoustics]
Discipline Codes
070206; 082403
Abstract
Aspect-Based Sentiment Analysis (ABSA) is a fine-grained sentiment analysis task with significant real-world value. The challenge is to generate an effective text representation and to construct an end-to-end model that can simultaneously detect (target, aspect, sentiment) triples in a sentence. Moreover, existing models neither account for the heavily unbalanced distribution of labels nor give enough consideration to the long-distance dependencies between targets and aspect-sentiment pairs. To overcome these challenges, we propose a novel end-to-end model named Prior-BERT and Multi-Task Learning (PBERT-MTL), which detects all triples more efficiently. We evaluate our model on the SemEval-2015 and SemEval-2016 datasets, and extensive results demonstrate the validity of our work. In addition, our model also achieves higher performance on a series of subtasks of target-aspect-sentiment detection. Code is available at https://github.com/CQUPTCaiKe/PBERT-MTL.
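The abstract notes that existing models ignore the heavily unbalanced label distribution. One common remedy for this (a minimal illustrative sketch only; the record does not describe the specific mechanism Prior-BERT uses) is to weight each class's contribution to the loss by its inverse frequency, so that rare sentiment labels are not drowned out by the dominant "no sentiment" class:

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Per-class loss weights inversely proportional to class frequency,
    normalized so the weights sum to the number of classes (mean 1.0)."""
    counts = Counter(labels)
    total = len(labels)
    raw = {cls: total / cnt for cls, cnt in counts.items()}
    scale = len(counts) / sum(raw.values())
    return {cls: w * scale for cls, w in raw.items()}

# Toy label distribution: the null class dominates, as is typical when most
# (target, aspect) candidate pairs in a sentence carry no sentiment.
labels = ["none"] * 90 + ["positive"] * 7 + ["negative"] * 3
weights = inverse_frequency_weights(labels)
```

The resulting dictionary assigns the rarest class ("negative") the largest weight and the dominant class ("none") the smallest, and could be passed to a weighted cross-entropy loss during training.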
Pages: 7817-7821
Page count: 5