Feature selection enhancement and feature space visualization for speech-based emotion recognition

Cited by: 0
Authors
Kanwal S. [1 ,2 ]
Asghar S. [1 ]
Ali H. [3 ]
Affiliations
[1] Department of Computer Science, Islamabad Campus, COMSATS University, Islamabad
[2] Department of Computer Science, University of Poonch Rawalakot, Azad Kashmir, Rawalakot
[3] College of Science and Engineering, Hamad Bin Khalifa University, Doha
Keywords
Feature selection; Feature space visualization; Machine learning; Speaker-independent emotion recognition; Speech emotion recognition; SVM; t-SNE graphs
DOI
10.7717/peerj-cs.1091
Abstract
Robust speech emotion recognition relies on the quality of the speech features. We present a speech feature enhancement strategy that improves speech emotion recognition. We used the INTERSPEECH 2010 challenge feature set, identified subsets within it, and applied principal component analysis (PCA) to each subset. Finally, the reduced features are fused horizontally. The resulting feature set is analyzed using t-distributed stochastic neighbor embedding (t-SNE) before the features are applied to emotion recognition. The method is compared with state-of-the-art methods from the literature. Empirical evidence is drawn from two well-known datasets: the Berlin Emotional Speech Database (EMO-DB) and the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), covering two languages, German and English, respectively. Compared to the baseline study, our method achieved an average recognition gain of 11.5% for six out of seven emotions on the EMO-DB dataset, and 13.8% for seven out of eight emotions on the RAVDESS dataset. © Copyright 2022 Kanwal et al.
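The pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the random feature matrix, the split into three equal subsets, and the choice of five PCA components per subset are all illustrative assumptions; the paper uses the 1,582-dimensional INTERSPEECH 2010 feature set with its own subset grouping.

```python
# Hypothetical sketch of the described pipeline: split a feature matrix into
# subsets, apply PCA to each subset, fuse the reduced subsets horizontally,
# then embed the fused features with t-SNE for visualization.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 60))          # stand-in for speech features (samples x dims)

# Illustrative grouping into 3 equal column-wise subsets.
subsets = np.array_split(X, 3, axis=1)

# PCA on each subset separately (5 components per subset is an assumption).
reduced = [PCA(n_components=5).fit_transform(s) for s in subsets]

# Horizontal fusion of the reduced subsets.
X_fused = np.hstack(reduced)

# 2-D t-SNE embedding of the fused feature space for inspection.
emb = TSNE(n_components=2, perplexity=30, init="pca",
           random_state=0).fit_transform(X_fused)

print(X_fused.shape, emb.shape)  # → (120, 15) (120, 2)
```

In practice the t-SNE scatter would be colored by emotion label to judge class separability before training a classifier such as an SVM on the fused features.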