CTNet: Conversational Transformer Network for Emotion Recognition

Cited by: 147
Authors
Lian, Zheng [1,2]
Liu, Bin [1,2]
Tao, Jianhua [1,2,3]
Affiliations
[1] Chinese Acad Sci, Natl Lab Pattern Recognit, Inst Automat, Beijing 100190, Peoples R China
[2] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100190, Peoples R China
[3] CAS Ctr Excellence Brain Sci & Intelligence Techn, Beijing 100190, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Emotion recognition; Context modeling; Feature extraction; Fuses; Speech processing; Data models; Bidirectional control; Context-sensitive modeling; conversational transformer network (CTNet); conversational emotion recognition; multimodal fusion; speaker-sensitive modeling;
DOI
10.1109/TASLP.2021.3049898
Chinese Library Classification
O42 [Acoustics];
Subject Classification Codes
070206; 082403;
Abstract
Emotion recognition in conversation is a crucial topic because of its widespread applications in human-computer interaction. Unlike vanilla emotion recognition of individual utterances, conversational emotion recognition requires modeling both context-sensitive and speaker-sensitive dependencies. Despite the promising results of recent works, they generally do not leverage advanced fusion techniques to generate the multimodal representations of an utterance, and are therefore limited in modeling intra-modal and cross-modal interactions. To address these problems, we propose a multimodal learning framework for conversational emotion recognition, called the conversational transformer network (CTNet). Specifically, we use a transformer-based structure to model intra-modal and cross-modal interactions among multimodal features. Meanwhile, we utilize word-level lexical features and segment-level acoustic features as inputs, enabling us to capture temporal information within each utterance. Additionally, to model context-sensitive and speaker-sensitive dependencies, we propose a multi-head-attention-based bidirectional GRU component and speaker embeddings. Experimental results on the IEMOCAP and MELD datasets demonstrate the effectiveness of the proposed method: it achieves an absolute 2.1%-6.2% improvement in weighted average F1 over state-of-the-art strategies.
Pages: 985-1000
Number of pages: 16
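
To make the architecture described in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of its named components: transformer-style cross-modal attention blocks over word-level lexical and segment-level acoustic features, additive speaker embeddings, and a multi-head-attention-based bidirectional GRU for conversational context. All class names, dimensions, and wiring choices (e.g. CTNetSketch, mean-pooling, the choice of six emotion classes) are assumptions for illustration, not the authors' released implementation; intra-modal self-attention blocks are omitted for brevity.

```python
# A minimal sketch of the CTNet-style components, under the assumptions
# stated above. This is illustrative, not the paper's official code.
import torch
import torch.nn as nn


class CrossModalBlock(nn.Module):
    """One transformer-style block in which one modality's sequence attends
    to another's (cross-modal interaction), plus a feed-forward sublayer."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(),
                                nn.Linear(4 * dim, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, query_seq, key_value_seq):
        # Queries come from one modality; keys/values from the other.
        attn_out, _ = self.attn(query_seq, key_value_seq, key_value_seq)
        x = self.norm1(query_seq + attn_out)
        return self.norm2(x + self.ff(x))


class CTNetSketch(nn.Module):
    """Hypothetical end-to-end wiring: cross-modal fusion of word-level
    lexical and segment-level acoustic features, speaker embeddings, and
    a BiGRU with multi-head self-attention for context modeling."""

    def __init__(self, lex_dim=300, aco_dim=128, dim=128,
                 num_speakers=2, num_classes=6):
        super().__init__()
        self.lex_proj = nn.Linear(lex_dim, dim)
        self.aco_proj = nn.Linear(aco_dim, dim)
        self.lex_to_aco = CrossModalBlock(dim)  # lexical attends to acoustic
        self.aco_to_lex = CrossModalBlock(dim)  # acoustic attends to lexical
        self.speaker_emb = nn.Embedding(num_speakers, 2 * dim)
        self.context_gru = nn.GRU(2 * dim, dim, bidirectional=True,
                                  batch_first=True)
        self.context_attn = nn.MultiheadAttention(2 * dim, 4, batch_first=True)
        self.classifier = nn.Linear(2 * dim, num_classes)

    def forward(self, lex, aco, speakers):
        # lex: (batch, n_utt, n_words, lex_dim)  word-level lexical features
        # aco: (batch, n_utt, n_segs, aco_dim)   segment-level acoustic features
        # speakers: (batch, n_utt)               integer speaker ids
        b, n, w, _ = lex.shape
        s = aco.shape[2]
        lex = self.lex_proj(lex).view(b * n, w, -1)
        aco = self.aco_proj(aco).view(b * n, s, -1)
        # Cross-modal interaction in both directions, mean-pooled over time.
        lex_fused = self.lex_to_aco(lex, aco).mean(dim=1)
        aco_fused = self.aco_to_lex(aco, lex).mean(dim=1)
        utt = torch.cat([lex_fused, aco_fused], dim=-1).view(b, n, -1)
        # Speaker-sensitive modeling: add a learned speaker embedding.
        utt = utt + self.speaker_emb(speakers)
        # Context-sensitive modeling: BiGRU + multi-head self-attention.
        ctx, _ = self.context_gru(utt)
        ctx, _ = self.context_attn(ctx, ctx, ctx)
        return self.classifier(ctx)  # per-utterance emotion logits


if __name__ == "__main__":
    model = CTNetSketch()
    lex = torch.randn(2, 5, 20, 300)   # 2 dialogues, 5 utterances, 20 words
    aco = torch.randn(2, 5, 30, 128)   # 30 acoustic segments per utterance
    spk = torch.randint(0, 2, (2, 5))  # binary speaker ids
    print(model(lex, aco, spk).shape)  # torch.Size([2, 5, 6])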