Multi-Stream Convolution-Recurrent Neural Networks Based on Attention Mechanism Fusion for Speech Emotion Recognition

Cited by: 13
Authors
Tao, Huawei [1 ]
Geng, Lei [1 ]
Shan, Shuai [1 ]
Mai, Jingchao [1 ]
Fu, Hongliang [1 ]
Affiliations
[1] Henan Univ Technol, Coll Informat Sci & Engn, Zhengzhou 450001, Peoples R China
Keywords
speech emotion recognition; feature extraction; hybrid neural network; multi-head attention mechanism; feature fusion; SPECTRAL FEATURES; MODEL
DOI
10.3390/e24081025
Chinese Library Classification
O4 [Physics]
Discipline Code
0702
Abstract
The quality of feature extraction plays a significant role in the performance of speech emotion recognition (SER). To extract discriminative, affect-salient features from speech signals and thereby improve SER performance, this paper proposes a multi-stream convolution-recurrent neural network based on an attention mechanism (MSCRNN-A). First, a multi-stream sub-branch fully convolutional network (MSFCN) based on AlexNet is presented to limit the loss of emotional information: in the MSFCN, sub-branches are added after each pooling layer to retain features at different resolutions, and the features from these branches are fused by addition. Second, the MSFCN is combined with a Bi-LSTM network to form a hybrid network that extracts speech emotion features while supplying the temporal structure of those features. Finally, a feature fusion model based on a multi-head attention mechanism is developed to obtain the best fused features: the attention mechanism computes the contribution of each network's features and then fuses them adaptively by weighting. To restrain gradient divergence in the network, the individual network features and the fused features are linked through shortcut connections to obtain the final fusion features used for recognition. Experimental results on three conventional SER corpora, CASIA, EMODB, and SAVEE, show that the proposed method significantly improves recognition performance, with a recognition rate superior to most existing state-of-the-art methods.
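The abstract's fusion step (attention weights over the feature streams, weighted summation, then a shortcut connection) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the dimensionality `d`, the two feature vectors, and the scoring vector `w` are all hypothetical stand-ins (random here, learned in a real network), and a single attention head is used for brevity instead of the paper's multi-head mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical feature streams, already projected to a common dimension d:
# one from the convolutional branch (MSFCN), one from the Bi-LSTM branch.
d = 64
f_cnn = rng.standard_normal(d)   # MSFCN feature vector (illustrative)
f_rnn = rng.standard_normal(d)   # Bi-LSTM feature vector (illustrative)
streams = np.stack([f_cnn, f_rnn])          # shape (2, d)

# Attention over streams: score each stream with a vector w (random here,
# learned in practice), normalize the scores into contribution weights,
# and fuse the streams by their weighted sum.
w = rng.standard_normal(d)
scores = streams @ w                        # shape (2,)
alpha = softmax(scores)                     # per-stream contribution, sums to 1
fused = alpha @ streams                     # adaptive weighted fusion, shape (d,)

# Shortcut connection: add the unweighted streams back so gradients can
# bypass the attention block, as the abstract describes.
fused = fused + streams.sum(axis=0)

print(alpha.shape, fused.shape)
```

In a trained model the weights `alpha` would shift toward whichever stream carries more affect-salient information for the current utterance, which is what the abstract means by "adaptive fusion."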
Pages: 13