Speech Emotion Recognition Using Convolution Neural Networks and Multi-Head Convolutional Transformer

Cited by: 15
Authors
Ullah, Rizwan [1 ]
Asif, Muhammad [2 ]
Shah, Wahab Ali [3 ]
Anjam, Fakhar [2 ]
Ullah, Ibrar [4 ]
Khurshaid, Tahir [5 ]
Wuttisittikulkij, Lunchakorn [1 ]
Shah, Shashi [1 ]
Ali, Syed Mansoor [6 ]
Alibakhshikenari, Mohammad [7 ]
Affiliations
[1] Chulalongkorn Univ, Dept Elect Engn, Wireless Commun Ecosyst Res Unit, Bangkok 10330, Thailand
[2] Univ Sci & Technol, Dept Elect Engn, Main Campus, Bannu 28100, Pakistan
[3] Namal Univ, Dept Elect Engn, Mianwali 42250, Pakistan
[4] Univ Engn & Technol Peshawar, Dept Elect Engn, Kohat Campus, Kohat 25000, Pakistan
[5] Yeungnam Univ, Dept Elect Engn, Gyongsan 38541, South Korea
[6] King Saud Univ, Coll Sci, Dept Phys & Astron, POB 2455, Riyadh 11451, Saudi Arabia
[7] Univ Carlos III Madrid, Dept Signal Theory & Commun, Madrid 28911, Spain
Keywords
speech emotion recognition; convolutional neural networks; convolutional Transformer encoder; multi-head attention; spatial features; temporal features; FEATURE-EXTRACTION; VOICE QUALITY; TIME-SERIES; 2D CNN; FEATURES; CLASSIFICATION; SPECTROGRAM; FUSION;
DOI
10.3390/s23136212
CLC Number
O65 [Analytical Chemistry];
Discipline Codes
070302; 081704
Abstract
Speech emotion recognition (SER) is a challenging task in human-computer interaction (HCI) systems. A key challenge in SER is extracting emotional features effectively from a speech utterance. Despite promising results, recent studies generally do not leverage advanced fusion algorithms to generate effective representations of the emotional features in speech utterances. To address this problem, we describe the fusion of spatial and temporal feature representations of speech emotion by parallelizing convolutional neural networks (CNNs) and a Transformer encoder for SER. We stack two parallel CNNs for spatial feature representation alongside a Transformer encoder for temporal feature representation, thereby simultaneously expanding the filter depth and reducing the feature map size, which yields an expressive hierarchical feature representation at a lower computational cost. We use the RAVDESS dataset to recognize eight different speech emotions. To minimize model overfitting, we augment the dataset and intensify its variations using Additive White Gaussian Noise (AWGN). With the spatial and sequential feature representations of the CNNs and the Transformer, the SER model achieves 82.31% accuracy for eight emotions on a hold-out dataset. The SER system is also evaluated on the IEMOCAP dataset, achieving 79.42% recognition accuracy for five emotions. Experimental results on the RAVDESS and IEMOCAP datasets demonstrate the effectiveness of the presented SER system and an absolute performance improvement over state-of-the-art (SOTA) models.
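The AWGN augmentation step described in the abstract can be sketched as below. This is a minimal illustrative sketch, not the paper's implementation: the function name `add_awgn`, the 15 dB target SNR, and the pure-Python list representation of the waveform are all assumptions introduced here.

```python
import math
import random

def add_awgn(signal, snr_db, seed=None):
    # Add white Gaussian noise to a waveform at a target SNR (in dB).
    # The noise variance is scaled so that
    # 10 * log10(signal_power / noise_power) == snr_db.
    rng = random.Random(seed)
    sig_power = sum(x * x for x in signal) / len(signal)
    noise_std = math.sqrt(sig_power / (10 ** (snr_db / 10)))
    return [x + rng.gauss(0.0, noise_std) for x in signal]

# Example: corrupt a 440 Hz tone sampled at 16 kHz with noise at 15 dB SNR
# (hypothetical values; the paper does not specify its SNR settings).
clean = [math.sin(2 * math.pi * 440 * n / 16000) for n in range(1600)]
noisy = add_awgn(clean, snr_db=15, seed=0)
```

In practice, each utterance is typically augmented at several noise levels, multiplying the effective size of a small corpus such as RAVDESS before feature extraction.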
Pages: 20