Cluster-Level Contrastive Learning for Emotion Recognition in Conversations

Cited by: 28
Authors
Yang, Kailai [1 ,2 ]
Zhang, Tianlin [1 ,2 ]
Alhuzali, Hassan [3 ]
Ananiadou, Sophia [1 ,2 ]
Affiliations
[1] Univ Manchester, NaCTeM, Manchester M13 9PL, England
[2] Univ Manchester, Dept Comp Sci, Manchester M13 9PL, England
[3] Umm Al Qura Univ, Coll Comp & Informat Syst, Mecca 24382, Saudi Arabia
Funding
Biotechnology and Biological Sciences Research Council (BBSRC), UK;
Keywords
Emotion recognition; Prototypes; Linguistics; Task analysis; Semantics; Training; Adaptation models; Cluster-level contrastive learning; emotion recognition in conversations; pre-trained knowledge adapters; valence-arousal-dominance; DIALOGUE; FRAMEWORK;
DOI
10.1109/TAFFC.2023.3243463
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
A key challenge for Emotion Recognition in Conversations (ERC) is to distinguish semantically similar emotions. Some works utilise Supervised Contrastive Learning (SCL), which uses categorical emotion labels as supervision signals and contrasts samples in a high-dimensional semantic space. However, categorical labels fail to provide quantitative information about the relations between emotions. ERC also does not depend equally on all embedded features in the semantic space, which makes high-dimensional SCL inefficient. To address these issues, we propose a novel low-dimensional Supervised Cluster-level Contrastive Learning (SCCL) method, which first reduces the high-dimensional SCL space to a three-dimensional affect representation space, Valence-Arousal-Dominance (VAD), then performs cluster-level contrastive learning to incorporate measurable emotion prototypes. To help model the dialogue and enrich the context, we leverage pre-trained knowledge adapters to infuse linguistic and factual knowledge. Experiments show that our method achieves new state-of-the-art results with 69.81% on IEMOCAP, 65.7% on MELD, and 62.51% on DailyDialog. The analysis also shows that the VAD space is not only suitable for ERC but also interpretable, with VAD prototypes enhancing its performance and stabilising SCCL training. In addition, the pre-trained knowledge adapters benefit the performance of both the utterance encoder and SCCL. Our code is available at: https://github.com/SteveKGYang/SCCLI
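The abstract does not give the SCCL objective itself; the following is a minimal NumPy sketch of what contrastive learning against per-emotion prototypes in a three-dimensional VAD space could look like. The function name `sccl_loss`, the cosine-similarity formulation, and the temperature `tau` are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def sccl_loss(z, labels, prototypes, tau=0.1):
    """Sketch of a cluster-level contrastive loss in VAD space.

    z:          (N, 3) utterance embeddings projected to Valence-Arousal-Dominance
    labels:     (N,)   integer emotion labels, indexing rows of `prototypes`
    prototypes: (K, 3) one VAD prototype point per emotion category
    """
    # Cosine similarity between each embedding and every prototype,
    # scaled by the temperature tau.
    z_n = z / np.linalg.norm(z, axis=1, keepdims=True)
    p_n = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sim = (z_n @ p_n.T) / tau                    # (N, K)

    # Log-softmax over prototypes (numerically stabilised); the loss pulls
    # each utterance toward its own emotion's prototype and pushes it away
    # from the prototypes of the other emotions.
    sim = sim - sim.max(axis=1, keepdims=True)
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(len(labels)), labels].mean()
```

Contrasting against a handful of prototypes rather than all pairs of samples is what makes the objective "cluster-level": the number of comparisons grows with the number of emotion categories, not with the batch size.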
Pages: 3269-3280
Page count: 12
Related Papers
50 items in total
  • [21] EmoNet: A Transfer Learning Framework for Multi-Corpus Speech Emotion Recognition
    Gerczuk, Maurice
    Amiriparian, Shahin
    Ottl, Sandra
    Schuller, Bjorn W.
    IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2023, 14 (02) : 1472 - 1487
  • [22] MLAL: Multiple Prompt Learning and Generation of Auxiliary Labeled Utterances for Emotion Recognition in Conversations
    Gou, Zhinan
    Chen, Yuxin
    Long, Yuchen
    Jia, Mengyao
    Liu, Zhili
    Zhu, Jun
    MACHINE LEARNING WITH APPLICATIONS, 2025, 20
  • [23] Emotion-Semantic-Aware Dual Contrastive Learning for Epistemic Emotion Identification of Learner-Generated Reviews in MOOCs
    Liu, Zhi
    Wen, Chaodong
    Su, Zhu
    Liu, Sannyuya
    Sun, Jianwen
    Kong, Weizheng
    Yang, Zongkai
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (11) : 16464 - 16477
  • [24] Knowing What and Why: Causal emotion entailment for emotion recognition in conversations
    Liu, Hao
    Wei, Runguo
    Tu, Geng
    Lin, Jiali
    Jiang, Dazhi
    Cambria, Erik
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 274
  • [25] Conversational transfer learning for emotion recognition
    Hazarika, Devamanyu
    Poria, Soujanya
    Zimmermann, Roger
    Mihalcea, Rada
    INFORMATION FUSION, 2021, 65 : 1 - 12
  • [26] Self-supervised group meiosis contrastive learning for EEG-based emotion recognition
    Kan, Haoning
    Yu, Jiale
    Huang, Jiajin
    Liu, Zihe
    Wang, Heqian
    Zhou, Haiyan
    APPLIED INTELLIGENCE, 2023, 53 (22) : 27207 - 27225
  • [27] A contrastive self-supervised learning method for source-free EEG emotion recognition
    Wang, Yingdong
    Ruan, Qunsheng
    Wu, Qingfeng
    Wang, Shuocheng
    USER MODELING AND USER-ADAPTED INTERACTION, 2025, 35 (01)
  • [29] DialoguePCN: Perception and Cognition Network for Emotion Recognition in Conversations
    Wu, Xiaolong
    Feng, Chang
    Xu, Mingxing
    Zheng, Thomas Fang
    Hamdulla, Askar
    IEEE ACCESS, 2023, 11 : 141251 - 141260
  • [30] Multi-Label Multimodal Emotion Recognition With Transformer-Based Fusion and Emotion-Level Representation Learning
    Le, Hoai-Duy
    Lee, Guee-Sang
    Kim, Soo-Hyung
    Kim, Seungwon
    Yang, Hyung-Jeong
    IEEE ACCESS, 2023, 11 : 14742 - 14751