Multimodal emotion recognition based on audio and text by using hybrid attention networks

Cited by: 28
Authors
Zhang, Shiqing [1 ]
Yang, Yijiao [1 ,2 ]
Chen, Chen [1 ]
Liu, Ruixin [1 ,2 ]
Tao, Xin [1 ]
Guo, Wenping [1 ]
Xu, Yicheng [3 ]
Zhao, Xiaoming [1 ]
Affiliations
[1] Taizhou Univ, Inst Intelligent Informat Proc, Taizhou 318000, Zhejiang, Peoples R China
[2] Zhejiang Univ Sci & Technol, Sch Sci, Hangzhou 310023, Zhejiang, Peoples R China
[3] Taizhou Vocat & Tech Coll, Sch Informat Technol Engn, Taizhou 318000, Zhejiang, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Multimodal emotion recognition; Deep learning; Local intra-modal attention; Cross-modal attention; Global inter-modal attention; NEURAL-NETWORKS; SPEECH; FEATURES;
DOI
10.1016/j.bspc.2023.105052
Chinese Library Classification (CLC)
R318 [Biomedical Engineering];
Subject Classification Code
0831;
Abstract
Multimodal Emotion Recognition (MER) has recently become a popular and challenging topic. The central challenge in MER is how to effectively fuse multimodal information. Most prior works do not fully exploit intra-modal and inter-modal attention mechanisms to jointly learn emotionally salient information within and across modalities, which limits MER performance. To address this problem, this paper proposes a new MER framework based on audio and text using Hybrid Attention Networks (MER-HAN). The proposed MER-HAN combines three attention mechanisms, namely local intra-modal attention, cross-modal attention, and global inter-modal attention, to effectively learn intra-modal and inter-modal emotionally salient features for MER. Specifically, an Audio and Text Encoder (ATE) block with local intra-modal attention is first designed to learn high-level audio and text feature representations from the corresponding audio and text sequences. Then, a Cross-Modal Attention (CMA) block jointly captures high-level shared feature representations across the audio and text modalities. Finally, a Multimodal Emotion Classification (MEC) block with global inter-modal attention produces the final MER results. Extensive experiments on two public multimodal emotion datasets, IEMOCAP and MELD, demonstrate the advantage of the proposed MER-HAN on MER tasks.
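
The following is a minimal PyTorch-style sketch of the three-stage hybrid attention design described in the abstract (ATE, CMA, and MEC blocks). The layer types, feature dimensions (e.g., 74-dimensional acoustic frames, 768-dimensional text embeddings), pooling choices, and the use of nn.MultiheadAttention are illustrative assumptions, not details taken from the paper itself.

# Minimal sketch of a hybrid-attention MER model in the spirit of MER-HAN.
# All layer sizes and the specific attention/pooling operators below are
# assumptions made for illustration only.
import torch
import torch.nn as nn


class HybridAttentionMER(nn.Module):
    def __init__(self, audio_dim=74, text_dim=768, hidden=256, heads=4, n_classes=4):
        super().__init__()
        # --- ATE block: per-modality encoders with local intra-modal attention ---
        self.audio_rnn = nn.GRU(audio_dim, hidden, batch_first=True, bidirectional=True)
        self.text_rnn = nn.GRU(text_dim, hidden, batch_first=True, bidirectional=True)
        self.audio_self_attn = nn.MultiheadAttention(2 * hidden, heads, batch_first=True)
        self.text_self_attn = nn.MultiheadAttention(2 * hidden, heads, batch_first=True)
        # --- CMA block: cross-modal attention in both directions ---
        self.a2t_attn = nn.MultiheadAttention(2 * hidden, heads, batch_first=True)
        self.t2a_attn = nn.MultiheadAttention(2 * hidden, heads, batch_first=True)
        # --- MEC block: global inter-modal attention over modality summaries ---
        self.inter_modal_attn = nn.MultiheadAttention(2 * hidden, heads, batch_first=True)
        self.classifier = nn.Linear(2 * 2 * hidden, n_classes)

    def forward(self, audio_seq, text_seq):
        # ATE: high-level sequences refined by local (intra-modal) self-attention.
        a, _ = self.audio_rnn(audio_seq)           # (B, Ta, 2*hidden)
        t, _ = self.text_rnn(text_seq)             # (B, Tt, 2*hidden)
        a, _ = self.audio_self_attn(a, a, a)
        t, _ = self.text_self_attn(t, t, t)
        # CMA: each modality queries the other to learn shared representations.
        a_shared, _ = self.a2t_attn(a, t, t)       # audio attends to text
        t_shared, _ = self.t2a_attn(t, a, a)       # text attends to audio
        # MEC: pool each modality, weight the two summaries with global
        # inter-modal attention, then classify the fused representation.
        summaries = torch.stack([a_shared.mean(dim=1), t_shared.mean(dim=1)], dim=1)
        fused, _ = self.inter_modal_attn(summaries, summaries, summaries)  # (B, 2, 2*hidden)
        return self.classifier(fused.flatten(start_dim=1))


# Usage with dummy inputs (batch of 2; sequence lengths are arbitrary).
model = HybridAttentionMER()
logits = model(torch.randn(2, 100, 74), torch.randn(2, 30, 768))
print(logits.shape)  # torch.Size([2, 4])

In this sketch each block maps onto one of the abstract's stages: the GRU encoders plus self-attention stand in for the ATE block, the two directional attention layers for the CMA block, and the attention over pooled modality summaries plus the linear classifier for the MEC block.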
Pages: 10