MFF-SAug: Multi feature fusion with spectrogram augmentation of speech emotion recognition using convolution neural network

Cited by: 30
Authors
Jothimani, S. [1 ]
Premalatha, K. [1 ]
Affiliations
[1] Bannari Amman Inst Technol, Dept Comp Sci & Engn, Sathyamangalam 638401, India
Keywords
Augmentation; Contrastive loss; MFCC; RMS; Speech emotion recognition; ZCR; Accuracy;
DOI
10.1016/j.chaos.2022.112512
Chinese Library Classification
O1 [Mathematics];
Discipline codes
0701 ; 070101 ;
Abstract
Speech Emotion Recognition (SER) is a complex task because of the feature selection required to capture emotion in human speech. SER plays a vital role in Human-Computer Interaction (HCI) and remains very challenging. Traditional methods provide inconsistent feature extraction for emotion recognition. The primary aim of this paper is to improve the classification accuracy for eight emotions from the human voice. The proposed MFF-SAug method enhances emotion prediction from speech through noise removal, white-noise injection, and pitch tuning. On the pre-processed speech signals, the feature extraction techniques Mel Frequency Cepstral Coefficients (MFCC), Zero Crossing Rate (ZCR), and Root Mean Square (RMS) energy are applied and combined to achieve substantial performance for emotion recognition. Augmentation is applied to the raw speech for a contrastive loss that maximizes agreement between differently augmented samples in the latent space, and a reconstruction loss on the input representation improves prediction accuracy. A state-of-the-art Convolutional Neural Network (CNN) is proposed for enhanced speech representation learning and voice emotion classification. Further, the MFF-SAug method is compared with a CNN + LSTM model. The experimental analysis was carried out on the RAVDESS, CREMA, SAVEE, and TESS datasets. The classifier achieved a robust representation for speech emotion recognition with accuracies of 92.6 %, 89.9 %, 84.9 %, and 99.6 % on the RAVDESS, CREMA, SAVEE, and TESS datasets, respectively.
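The ZCR and RMS features and the white-noise augmentation described in the abstract can be sketched in plain NumPy. This is a minimal illustration, not the authors' implementation: the frame length, hop size, and noise amplitude below are assumed values, and MFCC extraction (which requires a DSP library such as librosa) is omitted.

```python
import numpy as np

def frame_signal(y, frame_len=2048, hop=512):
    """Split a 1-D signal into overlapping frames (sizes are illustrative)."""
    n = 1 + max(0, (len(y) - frame_len) // hop)
    return np.stack([y[i * hop : i * hop + frame_len] for i in range(n)])

def zcr(frames):
    """Zero Crossing Rate: fraction of adjacent-sample sign changes per frame."""
    return np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)

def rms(frames):
    """Root Mean Square energy per frame."""
    return np.sqrt(np.mean(frames ** 2, axis=1))

def add_white_noise(y, amplitude=0.005, rng=None):
    """White-noise injection augmentation; amplitude is an assumed value."""
    if rng is None:
        rng = np.random.default_rng(0)
    return y + amplitude * rng.standard_normal(len(y))

# Toy input: a 1-second 440 Hz tone at a 22 050 Hz sample rate.
sr = 22050
t = np.linspace(0, 1, sr, endpoint=False)
y = 0.5 * np.sin(2 * np.pi * 440 * t)

frames = frame_signal(y)
# Fuse the two frame-level feature tracks into one vector, in the spirit of
# the paper's multi-feature fusion (MFCCs would be concatenated here too).
features = np.concatenate([zcr(frames), rms(frames)])
y_aug = add_white_noise(y)
```

For a pure 440 Hz tone, ZCR per frame sits near 2·440/22050 ≈ 0.04 and RMS near 0.5/√2 ≈ 0.354, which is a quick sanity check for the two extractors.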
Pages: 18