Feature selection enhancement and feature space visualization for speech-based emotion recognition

Cited by: 0
Authors
Kanwal S. [1 ,2 ]
Asghar S. [1 ]
Ali H. [3 ]
Affiliations
[1] Department of Computer Science, Islamabad Campus, COMSATS University Islamabad, Islamabad
[2] Department of Computer Science, University of Poonch Rawalakot, Azad Kashmir, Rawalakot
[3] College of Science and Engineering, Hamad Bin Khalifa University, Doha
Keywords
Feature selection; Feature space visualization; Machine learning; Speaker-independent emotion recognition; Speech emotion recognition; SVM; t-SNE graphs
DOI
10.7717/PEERJ-CS.1091
Abstract
Robust speech emotion recognition relies on the quality of the speech features. We present a speech feature enhancement strategy that improves speech emotion recognition. We used the INTERSPEECH 2010 challenge feature set. We identified subsets from the feature set and applied principal component analysis to each subset. Finally, the features are fused horizontally. The resulting feature set is analyzed using t-distributed stochastic neighbor embedding (t-SNE) before the features are applied to emotion recognition. The method is compared with state-of-the-art methods from the literature. The empirical evidence is drawn using two well-known datasets: the Berlin Emotional Speech Database (EMO-DB) and the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), for German and English, respectively. Our method achieved an average recognition gain of 11.5% for six out of seven emotions on the EMO-DB dataset, and 13.8% for seven out of eight emotions on the RAVDESS dataset, compared to the baseline study. © Copyright 2022 Kanwal et al.
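The enhancement pipeline described in the abstract (split the feature set into subsets, reduce each subset with PCA, then fuse the reduced parts horizontally) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the subset boundaries, component counts, and random data stand in for the real 1,582-dimensional INTERSPEECH 2010 features, and the final t-SNE visualization step is only noted in a comment.

```python
import numpy as np

def pca_reduce(X, k):
    """SVD-based PCA: center the columns and keep the top-k components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T  # project onto the k leading principal axes

rng = np.random.default_rng(0)
# Random stand-in for the 1582-dim INTERSPEECH 2010 feature vectors
# (100 utterances); the three subset boundaries below are assumptions.
features = rng.normal(size=(100, 1582))
subsets = np.split(features, [500, 1000], axis=1)

# Reduce each subset independently, then fuse horizontally.
reduced = [pca_reduce(S, 20) for S in subsets]
fused = np.hstack(reduced)
print(fused.shape)  # (100, 60)

# The fused features would then be visualized with t-SNE
# (e.g., sklearn.manifold.TSNE) before classification.
```

Reducing each subset separately, rather than the whole feature set at once, keeps components from being dominated by whichever feature group has the largest variance; the horizontal fusion simply concatenates the per-subset projections.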