ACEs: Unsupervised Multi-label Aspect Detection with Aspect-category Experts

Cited: 0
Authors
Yang, Lingyu [1 ]
Li, Hongjia [1 ]
Li, Lei [1 ]
Yuan, Chun [1 ]
Xia, Shu-Tao [1 ]
Affiliations
[1] Tsinghua Univ, Shenzhen Int Grad Sch, Shenzhen, Peoples R China
Source
2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN) | 2022
Keywords
aspect detection; multi-label; unsupervised;
DOI
10.1109/IJCNN55064.2022.9892431
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Unsupervised aspect detection (UAD) aims to automatically identify the aspect categories mentioned in product reviews. Existing unsupervised methods mainly focus on single-label prediction, which works poorly in realistic application scenarios since a review usually contains multiple aspect categories. Recent attempts alleviate this issue by setting a threshold. However, the imbalance in the number of category-related words across review segments, a common but neglected phenomenon, makes it difficult for these methods to find optimal thresholds that recall all categories. In this paper, we propose a novel unsupervised method termed Aspect-Category Experts (ACEs) to address this problem. Our goal is to train a set of aspect-category experts that encode the sentence in parallel, where experts and aspect categories correspond one-to-one. Experts for different aspects weight the embeddings of representative words with aspect-specific attention to avoid the negative impact of accumulation. In addition, to enhance the complementarity between different experts and reduce inter-class feature entanglement, we construct a novel mutual exclusion loss (ME loss) that improves aspect detection performance. Extensive experiments on four datasets demonstrate that our proposed ACEs model outperforms previous state-of-the-art methods.
Pages: 8