Methods Study of a Unified ABSA Generation Framework Based on Pre-trained Model Induced Dependency Tree

Cited by: 0
Authors
Xu, Peiyan [1 ]
Jin, Guozhe [1 ]
Zhao, Yahui [1 ]
Cui, Rongyi [1 ]
Affiliations
[1] Yanbian Univ, Dept Comp Sci & Technol, Yanji, Peoples R China
Source
2024 4TH INTERNATIONAL CONFERENCE ON COMPUTER COMMUNICATION AND ARTIFICIAL INTELLIGENCE, CCAI 2024 | 2024
Keywords
syntactic analysis; pre-trained model; aspect-based sentiment analysis; BART model;
DOI
10.1109/CCAI61966.2024.10603255
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Aspect-based sentiment analysis (ABSA) is a fine-grained sentiment analysis task that aims to extract aspect terms together with their sentiment polarity and the sentiment words related to them, and it plays an important role in capturing users' sentiments. Dependency parsing, a form of syntactic analysis, studies the grammatical structure of sentences and provides information about the relations between the words in a sentence. To improve the accuracy of ABSA, researchers have integrated syntactic information into ABSA models to capture the correct dependencies between words, which has yielded strong results. However, the syntactic information used in ABSA is currently supplied by external parsing tools; these tools emphasize formally correct sentence structure, are easily confused by colloquial sentences, and lack flexibility for downstream tasks. Therefore, this paper introduces a dependency tree induced from a pre-trained model, which carries rich linguistic knowledge, and combines it with a unified ABSA generation framework to perform the AE, ALSC, and AESC tasks. The results show that the syntax tree induced by the pre-trained model, combined with the unified ABSA generation framework, enables the three subtasks AE, ALSC, and AESC to achieve superior results on the Twitter dataset.
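The record does not describe the induction algorithm itself, but the general idea of inducing a dependency tree from a pre-trained model can be sketched as follows: treat a token-token score matrix (e.g., attention weights or perturbed-masking impact scores taken from a model such as BERT) as edge weights, then extract a spanning tree over the tokens. The toy score matrix, the Prim-style greedy extraction, and the choice of root below are illustrative assumptions, not the paper's actual method.

```python
def induce_tree(scores, root=0):
    """Induce a dependency tree from a token-token score matrix.

    Greedy (Prim-style) maximum spanning tree: starting from the root,
    repeatedly attach the not-yet-attached token with the highest score
    to any already-attached token. Returns heads[i] = parent index of
    token i (-1 for the root). The scores here stand in for attention or
    impact values from a pre-trained model; this is a sketch, not the
    paper's algorithm.
    """
    n = len(scores)
    heads = [-1] * n
    attached = {root}
    while len(attached) < n:
        best = None  # (score, head, child)
        for h in attached:
            for c in range(n):
                if c in attached:
                    continue
                if best is None or scores[h][c] > best[0]:
                    best = (scores[h][c], h, c)
        _, h, c = best
        heads[c] = h
        attached.add(c)
    return heads


# Hypothetical scores for the tokens ["the", "food", "was", "great"],
# rooted at "was" (index 2).
S = [
    [0.0, 0.6, 0.1, 0.1],
    [0.6, 0.0, 0.5, 0.3],
    [0.1, 0.5, 0.0, 0.7],
    [0.1, 0.3, 0.7, 0.0],
]
print(induce_tree(S, root=2))  # [1, 2, -1, 2]: the->food, food->was, great->was
```

In practice, published tree-induction work tends to use a maximum spanning arborescence (Chu-Liu/Edmonds) or Eisner's algorithm over the score matrix rather than this greedy variant; the greedy version is used here only to keep the sketch short and self-contained.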
Pages: 339-343
Page count: 5