Aspect-Specific Context Modeling for Aspect-Based Sentiment Analysis

Cited by: 6
Authors
Ma, Fang [1 ]
Zhang, Chen [1 ]
Zhang, Bo [1 ]
Song, Dawei [1 ]
Affiliations
[1] Beijing Institute of Technology, Beijing, China
Source
NATURAL LANGUAGE PROCESSING AND CHINESE COMPUTING, NLPCC 2022, PT I | 2022, Vol. 13551
Keywords
Aspect-based sentiment analysis; Context modeling; Pretrained language model
DOI
10.1007/978-3-031-17120-8_40
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Aspect-based sentiment analysis (ABSA) aims at predicting the sentiment polarity (SC) of, or extracting the opinion span (OE) expressed towards, a given aspect. Previous work in ABSA mostly relies on rather complicated aspect-specific feature induction. Recently, pretrained language models (PLMs), e.g., BERT, have been used as context modeling layers to simplify the feature induction structures and achieve state-of-the-art performance. However, such PLM-based context modeling may not be sufficiently aspect-specific. A key question is therefore left under-explored: how can aspect-specific context be better modeled through PLMs? To answer this question, we attempt to enhance aspect-specific context modeling with PLMs in a non-intrusive manner. We propose three aspect-specific input transformations, namely aspect companion, aspect prompt, and aspect marker. Informed by these transformations, non-intrusive aspect-specific PLMs can be achieved, promoting the PLM to pay more attention to the aspect-specific context in a sentence. Additionally, we craft an adversarial benchmark for ABSA (advABSA) to examine how aspect-specific modeling impacts model robustness. Extensive experimental results on standard and adversarial benchmarks for SC and OE demonstrate the effectiveness and robustness of the proposed method, yielding new state-of-the-art performance on OE and competitive performance on SC.
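The abstract names the three input transformations but this record does not spell out their exact forms. The minimal Python sketch below illustrates one plausible reading: the concrete templates and the [ASP]/[/ASP] marker tokens are assumptions for illustration, not taken from the paper.

```python
# Sketch of the three aspect-specific input transformations named in the
# abstract, as plain string rewrites applied before PLM tokenization.
# Template wording and marker tokens are assumed, not from the paper.

def aspect_companion(sentence: str, aspect: str) -> str:
    """Append the aspect as a companion segment after the sentence."""
    return f"{sentence} [SEP] {aspect}"

def aspect_prompt(sentence: str, aspect: str) -> str:
    """Append a natural-language prompt naming the aspect (hypothetical template)."""
    return f"{sentence} [SEP] the {aspect} is [MASK]"

def aspect_marker(sentence: str, aspect: str) -> str:
    """Surround the aspect's occurrence in the sentence with marker tokens
    (hypothetical [ASP]/[/ASP] tokens)."""
    return sentence.replace(aspect, f"[ASP] {aspect} [/ASP]")

if __name__ == "__main__":
    s = "The battery life is great but the screen is dim."
    for transform in (aspect_companion, aspect_prompt, aspect_marker):
        print(transform.__name__, "->", transform(s, "battery life"))
```

All three rewrites are non-intrusive in the sense the abstract describes: they modify only the model's input text, leaving the PLM's architecture and weights untouched.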
Pages: 513-526
Page count: 14