Cluster-Level Contrastive Learning for Emotion Recognition in Conversations

Cited by: 28
Authors
Yang, Kailai [1 ,2 ]
Zhang, Tianlin [1 ,2 ]
Alhuzali, Hassan [3 ]
Ananiadou, Sophia [1 ,2 ]
Affiliations
[1] Univ Manchester, NaCTeM, Manchester M13 9PL, England
[2] Univ Manchester, Dept Comp Sci, Manchester M13 9PL, England
[3] Umm Al Qura Univ, Coll Comp & Informat Syst, Mecca 24382, Saudi Arabia
Funding
Biotechnology and Biological Sciences Research Council (UK);
Keywords
Emotion recognition; Prototypes; Linguistics; Task analysis; Semantics; Training; Adaptation models; Cluster-level contrastive learning; emotion recognition in conversations; pre-trained knowledge adapters; valence-arousal-dominance; DIALOGUE; FRAMEWORK;
DOI
10.1109/TAFFC.2023.3243463
Chinese Library Classification (CLC) Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
A key challenge for Emotion Recognition in Conversations (ERC) is to distinguish semantically similar emotions. Some works utilise Supervised Contrastive Learning (SCL), which uses categorical emotion labels as supervision signals and contrasts representations in a high-dimensional semantic space. However, categorical labels fail to provide quantitative relations between emotions. ERC also does not depend equally on all embedded features in the semantic space, which makes high-dimensional SCL inefficient. To address these issues, we propose a novel low-dimensional Supervised Cluster-level Contrastive Learning (SCCL) method, which first reduces the high-dimensional SCL space to a three-dimensional affect representation space, Valence-Arousal-Dominance (VAD), then performs cluster-level contrastive learning to incorporate measurable emotion prototypes. To help model the dialogue and enrich the context, we leverage pre-trained knowledge adapters to infuse linguistic and factual knowledge. Experiments show that our method achieves new state-of-the-art results of 69.81% on IEMOCAP, 65.7% on MELD, and 62.51% on DailyDialog. The analysis also shows that the VAD space is not only suitable for ERC but also interpretable, with VAD prototypes enhancing performance and stabilising the training of SCCL. In addition, the pre-trained knowledge adapters benefit the performance of the utterance encoder and SCCL. Our code is available at: https://github.com/SteveKGYang/SCCLI
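The abstract describes a two-step procedure: project high-dimensional utterance representations into the three-dimensional VAD space, then contrast each projected point against measurable emotion prototypes at the cluster level. Below is a minimal sketch of that idea in PyTorch; the names VADProjector, sccl_loss and VAD_PROTOTYPES, the placeholder prototype coordinates, and the distance-based loss are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

# Minimal sketch of cluster-level contrast in a 3-D VAD space.
# All names and prototype values below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical VAD prototypes (valence, arousal, dominance) for a few emotion
# classes, e.g. taken from an affective lexicon; the numbers are placeholders.
VAD_PROTOTYPES = torch.tensor([
    [0.9, 0.7, 0.8],   # joy
    [0.1, 0.8, 0.3],   # anger
    [0.2, 0.3, 0.2],   # sadness
    [0.5, 0.5, 0.5],   # neutral
])

class VADProjector(nn.Module):
    """Projects high-dimensional utterance embeddings to the 3-D VAD space."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, 3)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Keep coordinates in [0, 1], matching typical VAD score ranges.
        return torch.sigmoid(self.proj(h))

def sccl_loss(vad: torch.Tensor, labels: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Cluster-level contrast: pull each utterance's VAD point toward its
    emotion prototype and push it away from the other prototypes."""
    dists = torch.cdist(vad, VAD_PROTOTYPES)        # (batch, num_classes)
    logits = -dists.pow(2) / temperature            # nearer prototype -> larger logit
    return F.cross_entropy(logits, labels)

if __name__ == "__main__":
    encoder_out = torch.randn(8, 768)               # stand-in for utterance encoder states
    labels = torch.randint(0, VAD_PROTOTYPES.size(0), (8,))
    projector = VADProjector(768)
    loss = sccl_loss(projector(encoder_out), labels)
    loss.backward()
    print(f"SCCL loss: {loss.item():.4f}")

In this sketch, pulling each projected point toward its own emotion prototype and away from the others plays the role of the cluster-level contrast; in the paper this objective is combined with the ERC classification loss and the adapter-augmented utterance encoder.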
Pages: 3269 - 3280
Page count: 12
Related Papers
50 records in total
  • [1] Disentangled Variational Autoencoder for Emotion Recognition in Conversations
    Yang, Kailai
    Zhang, Tianlin
    Ananiadou, Sophia
    IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2024, 15 (02) : 508 - 518
  • [2] Deep Imbalanced Learning for Multimodal Emotion Recognition in Conversations
    Meng, Tao
    Shou, Yuntao
    Ai, Wei
    Yin, Nan
    Li, Keqin
    IEEE TRANSACTIONS ON ARTIFICIAL INTELLIGENCE, 2024, 5 (12) : 1 - 15
  • [3] SigRep: Toward Robust Wearable Emotion Recognition With Contrastive Representation Learning
    Dissanayake, Vipula
    Seneviratne, Sachith
    Rana, Rajib
    Wen, Elliott
    Kaluarachchi, Tharindu
    Nanayakkara, Suranga
    IEEE ACCESS, 2022, 10 : 18105 - 18120
  • [4] Superpixel Prior Cluster-Level Contrastive Clustering Network for Large-Scale Urban Hyperspectral Images and Vehicle Detection
    Li, Tiancong
    Cai, Yaoming
    Zhang, Yongshan
    Cai, Zhihua
    Jiang, Guozhu
    Liu, Xiaobo
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2025, 74 (02) : 2019 - 2031
  • [5] A Self-Fusion Network Based on Contrastive Learning for Group Emotion Recognition
    Wang, Xingzhi
    Zhang, Dong
    Tan, Hong-Zhou
    Lee, Dah-Jye
    IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, 2023, 10 (02) : 458 - 469
  • [6] Deep emotion recognition in textual conversations: a survey
    Pereira, Patricia
    Moniz, Helena
    Carvalho, Joao Paulo
    ARTIFICIAL INTELLIGENCE REVIEW, 2024, 58 (01)
  • [7] Emotion recognition in conversations with emotion shift detection based on multi-task learning
    Gao, Qingqing
    Cao, Biwei
    Guan, Xin
    Gu, Tianyun
    Bao, Xing
    Wu, Junyan
    Liu, Bo
    Cao, Jiuxin
    KNOWLEDGE-BASED SYSTEMS, 2022, 248
  • [8] Recognition of Emotions in User-Generated Videos through Frame-Level Adaptation and Emotion Intensity Learning
    Zhang, Haimin
    Xu, Min
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 881 - 891
  • [9] Contrastive Learning for Domain Transfer in Cross-Corpus Emotion Recognition
    Yin, Yufeng
    Lu, Liupei
    Xiao, Yao
    Xu, Zhi
    Cai, Kaijie
    Jiang, Haonan
    Gratch, Jonathan
    Soleymani, Mohammad
    2021 9TH INTERNATIONAL CONFERENCE ON AFFECTIVE COMPUTING AND INTELLIGENT INTERACTION (ACII), 2021,
  • [10] Emotion Recognition Using EEG Signals and Audiovisual Features with Contrastive Learning
    Lee, Ju-Hwan
    Kim, Jin-Young
    Kim, Hyoung-Gook
    BIOENGINEERING-BASEL, 2024, 11 (10):