Prompt-Based Generative Multi-label Emotion Prediction with Label Contrastive Learning

Cited by: 4
Authors
Chai, Yuyang [1 ]
Teng, Chong [1 ]
Fei, Hao [2 ]
Wu, Shengqiong [1 ]
Li, Jingye [1 ]
Cheng, Ming [3 ]
Ji, Donghong [1 ]
Li, Fei [1 ]
Affiliations
[1] Wuhan Univ, Sch Cyber Sci & Engn, Minist Educ, Key Lab Aerosp Informat Secur & Trusted Comp, Wuhan, Peoples R China
[2] Natl Univ Singapore, Sch Comp, Singapore, Singapore
[3] Zhengzhou Univ, Affiliated Hosp 1, Zhengzhou, Peoples R China
Source
NATURAL LANGUAGE PROCESSING AND CHINESE COMPUTING, NLPCC 2022, PT I | 2022, Vol. 13551
Funding
National Natural Science Foundation of China;
Keywords
Emotion prediction; Text generation; Prompt learning; Contrastive learning;
DOI
10.1007/978-3-031-17120-8_43
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Multi-label emotion prediction, which aims to predict emotion labels from text, has attracted increasing attention recently. In this task, emotion labels are ubiquitously and highly correlated. Existing state-of-the-art models solve multi-label emotion prediction in a sequence-to-sequence (Seq2Seq) manner, yet such label correlations are only leveraged on the decoding side. In this work, we propose an emotion prediction framework that jointly generates emotion labels and template sentences via a Seq2Seq language model. On the one hand, our template-based natural language generation method makes better use of the generative language model than generating plain label sequences as in prior Seq2Seq-based generative classification models. On the other hand, we introduce Correlation-based Label Prompts (CLP) through soft prompt learning and contrastive learning, which enables our model to further consider emotion label correlations on the encoding side. To demonstrate the effectiveness of our prompt-based generative multi-label emotion prediction model, we perform experiments on the GoEmotions and SemEval 2018 datasets, achieving competitive results and outperforming 7 baselines w.r.t. 3 evaluation metrics. In-depth analyses show that the template-based generation manner is clearly superior to generating label sequences and that our model is particularly effective at modeling label correlations.
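The paper's exact templates and loss formulation are not given in this record; the following minimal sketch only illustrates the two ideas the abstract describes, under assumed names. The toy EMOTIONS set, the hypothetical verbalize helper that turns gold labels into a template target sentence for a Seq2Seq model, and the LabelContrastiveLoss over soft label-prompt embeddings (treating labels that co-occur within a batch as positives) are all illustrative assumptions, not the authors' implementation.

# Minimal sketch (not the authors' code) of template-based target generation
# and a contrastive loss over soft label-prompt embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMOTIONS = ["joy", "anger", "fear", "optimism"]  # toy label set (assumption)

def verbalize(labels):
    # Turn a set of gold emotion labels into a natural-language target sentence
    # that a Seq2Seq model (e.g. BART/T5) could be trained to generate.
    # The template wording here is illustrative only.
    present = [e for e in EMOTIONS if e in labels]
    return "The emotions expressed are " + ", ".join(present) + "."

class LabelContrastiveLoss(nn.Module):
    # Contrastive objective over learnable label-prompt embeddings.
    # Labels that co-occur in the same example are treated as positives;
    # this is an assumed formulation, not the paper's exact loss.
    def __init__(self, num_labels, dim, temperature=0.1):
        super().__init__()
        self.prompts = nn.Embedding(num_labels, dim)  # soft label prompts
        self.t = temperature

    def forward(self, label_matrix):
        # label_matrix: (batch, num_labels) multi-hot gold labels
        z = F.normalize(self.prompts.weight, dim=-1)            # (L, d)
        sim = z @ z.t() / self.t                                # label-label similarities
        # co-occurrence counts within the batch define positive pairs
        co = label_matrix.t().float() @ label_matrix.float()    # (L, L)
        pos = (co > 0).float()
        pos.fill_diagonal_(0)                                   # a label is not its own positive
        # exclude self-similarity from the softmax denominator
        logits = sim.masked_fill(torch.eye(z.size(0), dtype=torch.bool), -1e9)
        log_prob = F.log_softmax(logits, dim=-1)
        denom = pos.sum(-1).clamp(min=1)
        # average log-likelihood of co-occurring labels, per label
        return -((log_prob * pos).sum(-1) / denom).mean()

# usage on a toy batch: joy+optimism and anger+fear
labels = torch.tensor([[1, 0, 0, 1],
                       [0, 1, 1, 0]])
loss_fn = LabelContrastiveLoss(num_labels=4, dim=16)
print(verbalize({"joy", "optimism"}))
print(loss_fn(labels).item())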
Pages: 551-563
Number of Pages: 13