TDFNet: Transformer-Based Deep-Scale Fusion Network for Multimodal Emotion Recognition

Cited by: 6
Authors
Zhao, Zhengdao [1]
Wang, Yuhua [1]
Shen, Guang [1]
Xu, Yuezhu [1]
Zhang, Jiayuan [2]
Affiliations
[1] Harbin Engineering University, High Performance Computing Research Center, Harbin 150001, People's Republic of China
[2] Harbin Engineering University, High Performance Computing Laboratory, Harbin 150001, People's Republic of China
Funding
National Natural Science Foundation of China
Keywords
Emotion recognition; Feature extraction; Transformers; Correlation; Data models; Speech recognition; Computer architecture; Deep-scale fusion transformer; multimodal embedding; multimodal emotion recognition; mutual correlation; mutual transformer;
DOI
10.1109/TASLP.2023.3316458
CLC classification number
O42 [Acoustics]
Subject classification codes
070206; 082403
Abstract
As deep learning research continues to progress, artificial intelligence technology is gradually empowering various fields. To achieve a more natural human-computer interaction experience, accurately recognizing the emotional state of speech interactions has become a new research hotspot. Sequence modeling methods based on deep learning have advanced emotion recognition, but mainstream methods still suffer from insufficient multimodal information interaction, difficulty in learning emotion-related features, and low recognition accuracy. In this article, we propose a transformer-based deep-scale fusion network (TDFNet) for multimodal emotion recognition to address these problems. The multimodal embedding (ME) module in TDFNet uses pretrained models to alleviate the data scarcity problem, providing the model with prior knowledge of multimodal information learned from a large amount of unlabeled data. Furthermore, a mutual transformer (MT) module is introduced to learn multimodal emotional commonality and speaker-related emotional features, improving contextual emotional semantic understanding. In addition, we design a novel emotion feature learning method named the deep-scale transformer (DST), which further improves emotion recognition by aligning multimodal features and learning multiscale emotion features through GRUs with shared weights. To comparatively evaluate the performance of TDFNet, experiments are conducted on the IEMOCAP corpus under three reasonable data splitting strategies. The experimental results show that TDFNet achieves 82.08% WA and 82.57% UA under the RA data split, improvements of 1.78% WA and 1.17% UA over the previous state-of-the-art method, respectively. Benefiting from the attentively aligned mutual correlations and fine-grained emotion-related features, TDFNet achieves significant improvements in multimodal emotion recognition.
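The abstract describes cross-modal ("mutual") transformer interaction between modalities and multiscale feature learning through GRUs with shared weights. The following minimal PyTorch sketch only illustrates these two ideas under stated assumptions; it is not the authors' implementation, and all class and parameter names (MutualAttentionBlock, DeepScaleGRU, dim, n_scales) are hypothetical rather than taken from the TDFNet paper.

# Illustrative sketch only: cross-modal attention between audio and text,
# plus a shared-weight GRU applied at several temporal scales.
import torch
import torch.nn as nn


class MutualAttentionBlock(nn.Module):
    """Audio queries attend over text keys/values and vice versa (cross-modal attention)."""

    def __init__(self, dim: int = 256, n_heads: int = 4):
        super().__init__()
        self.audio_to_text = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.text_to_audio = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm_a = nn.LayerNorm(dim)
        self.norm_t = nn.LayerNorm(dim)

    def forward(self, audio: torch.Tensor, text: torch.Tensor):
        # audio: (B, Ta, dim), text: (B, Tt, dim)
        a2t, _ = self.audio_to_text(query=audio, key=text, value=text)
        t2a, _ = self.text_to_audio(query=text, key=audio, value=audio)
        return self.norm_a(audio + a2t), self.norm_t(text + t2a)


class DeepScaleGRU(nn.Module):
    """One GRU whose weights are shared across progressively downsampled sequences."""

    def __init__(self, dim: int = 256, n_scales: int = 3):
        super().__init__()
        self.gru = nn.GRU(dim, dim, batch_first=True)  # same weights reused at every scale
        self.n_scales = n_scales

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = []
        for s in range(self.n_scales):
            xs = x[:, :: 2 ** s, :]      # coarser temporal resolution at each scale
            _, h = self.gru(xs)          # final hidden state summarizes this scale
            feats.append(h.squeeze(0))
        return torch.cat(feats, dim=-1)  # (B, dim * n_scales) multiscale feature


if __name__ == "__main__":
    audio = torch.randn(2, 100, 256)   # dummy acoustic embeddings
    text = torch.randn(2, 40, 256)     # dummy textual embeddings
    a, t = MutualAttentionBlock()(audio, text)
    fused = DeepScaleGRU()(torch.cat([a, t], dim=1))
    print(fused.shape)                 # torch.Size([2, 768])

Sharing one GRU across scales keeps the parameter count constant while exposing the recognizer to both fine-grained and coarse temporal dynamics, which is the general intuition behind the multiscale feature learning the abstract attributes to DST.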
Pages: 3771-3782
Number of pages: 12
References
55 in total
[51] Yeh, S. L., 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019, p. 6685, DOI: 10.1109/ICASSP.2019.8683293.
[52] Yoon, S., 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019, p. 2822, DOI: 10.1109/ICASSP.2019.8683483.
[53] Yoon, S., 2018 IEEE Spoken Language Technology Workshop (SLT), 2018, p. 112, DOI: 10.1109/SLT.2018.8639583.
[54] Zhang, R.; Wu, H.; Li, W.; Jiang, D.; Zou, W.; Li, X., "Transformer Based Unsupervised Pre-Training for Acoustic Representation Learning," 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2021), 2021, pp. 6933-6937.
[55] Zhong, Q.; Ding, L.; Liu, J.; Du, B.; Jin, H.; Tao, D., "Knowledge Graph Augmented Network Towards Multiview Representation Learning for Aspect-Based Sentiment Analysis," IEEE Transactions on Knowledge and Data Engineering, 2023, 35(10): 10098-10111.