CTNet: Conversational Transformer Network for Emotion Recognition

Cited by: 147
Authors
Lian, Zheng [1 ,2 ]
Liu, Bin [1 ,2 ]
Tao, Jianhua [1 ,2 ,3 ]
Affiliations
[1] Chinese Acad Sci, Natl Lab Pattern Recognit, Inst Automat, Beijing 100190, Peoples R China
[2] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100190, Peoples R China
[3] CAS Ctr Excellence Brain Sci & Intelligence Techn, Beijing 100190, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Emotion recognition; Context modeling; Feature extraction; Fuses; Speech processing; Data models; Bidirectional control; Context-sensitive modeling; conversational transformer network (CTNet); conversational emotion recognition; multimodal fusion; speaker-sensitive modeling;
DOI
10.1109/TASLP.2021.3049898
CLC Classification
O42 [Acoustics];
Discipline Codes
070206 ; 082403 ;
Abstract
Emotion recognition in conversation is a crucial topic because of its widespread applications in human-computer interaction. Unlike vanilla emotion recognition of individual utterances, conversational emotion recognition requires modeling both context-sensitive and speaker-sensitive dependencies. Despite their promising results, recent works generally do not leverage advanced fusion techniques to generate multimodal utterance representations, and are therefore limited in modeling intra-modal and cross-modal interactions. To address these problems, we propose a multimodal learning framework for conversational emotion recognition, called the conversational transformer network (CTNet). Specifically, we use a transformer-based structure to model intra-modal and cross-modal interactions among multimodal features. We take word-level lexical features and segment-level acoustic features as inputs, which enables the model to capture temporal information within each utterance. Additionally, to model context-sensitive and speaker-sensitive dependencies, we employ a multi-head-attention-based bidirectional GRU component and speaker embeddings. Experimental results on the IEMOCAP and MELD datasets demonstrate the effectiveness of the proposed method. Our method achieves an absolute 2.1%-6.2% improvement in weighted average F1 over state-of-the-art strategies.
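The abstract describes the architecture only at a high level. The sketch below is a minimal PyTorch illustration of the ingredients it names (a cross-modal transformer over word-level lexical and segment-level acoustic features, speaker embeddings, and a multi-head-attention bidirectional GRU over the conversation). All class names, layer sizes, the pooling step, and the number of emotion classes are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of the CTNet-style ingredients described in the abstract.
# Module layout and hyperparameters are assumptions for illustration only.
import torch
import torch.nn as nn


class CrossModalTransformerSketch(nn.Module):
    """Fuses acoustic and lexical sequences with self- and cross-attention."""

    def __init__(self, d_model=128, n_heads=4):
        super().__init__()
        self.self_attn_a = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.self_attn_l = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, acoustic, lexical):
        # Intra-modal interactions: self-attention within each modality.
        a = self.self_attn_a(acoustic)
        l = self.self_attn_l(lexical)
        # Cross-modal interactions: lexical tokens attend to acoustic segments.
        fused, _ = self.cross_attn(query=l, key=a, value=a)
        # Pool over time to obtain one multimodal utterance representation.
        return fused.mean(dim=1)


class ConversationModelSketch(nn.Module):
    """Context- and speaker-sensitive modeling over a sequence of utterances."""

    def __init__(self, d_model=128, n_speakers=2, n_classes=6):
        super().__init__()
        self.utterance_encoder = CrossModalTransformerSketch(d_model)
        self.speaker_emb = nn.Embedding(n_speakers, d_model)
        self.bigru = nn.GRU(d_model, d_model, bidirectional=True, batch_first=True)
        self.context_attn = nn.MultiheadAttention(2 * d_model, 4, batch_first=True)
        self.classifier = nn.Linear(2 * d_model, n_classes)

    def forward(self, acoustic, lexical, speaker_ids):
        # acoustic: (n_utt, n_segments, d), lexical: (n_utt, n_words, d),
        # speaker_ids: (n_utt,) -- one conversation per forward pass.
        utt = self.utterance_encoder(acoustic, lexical)   # (n_utt, d)
        utt = utt + self.speaker_emb(speaker_ids)         # speaker-sensitive
        ctx, _ = self.bigru(utt.unsqueeze(0))             # context-sensitive BiGRU
        ctx, _ = self.context_attn(ctx, ctx, ctx)         # multi-head attention over context
        return self.classifier(ctx.squeeze(0))            # (n_utt, n_classes)


if __name__ == "__main__":
    model = ConversationModelSketch()
    acoustic = torch.randn(5, 30, 128)   # 5 utterances, 30 acoustic segments each
    lexical = torch.randn(5, 20, 128)    # 5 utterances, 20 words each
    speakers = torch.tensor([0, 1, 0, 1, 0])
    print(model(acoustic, lexical, speakers).shape)  # torch.Size([5, 6])
```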
Pages: 985-1000
Page count: 16