Fusion of PCA and ICA in Statistical Subset Analysis for Speech Emotion Recognition

Times Cited: 0
Authors
Kingeski, Rafael [1 ]
Henning, Elisa [2 ]
Paterno, Aleksander S. [1 ]
Affiliations
[1] Santa Catarina State Univ UDESC, Ctr Sci & Technol, Dept Elect Engn, BR-89219710 Joinville, SC, Brazil
[2] Santa Catarina State Univ UDESC, Ctr Technol Sci, Dept Math, Rua Paulo Malschitzki,200 Zona Ind Norte, BR-89219710 Joinville, SC, Brazil
Keywords
speech emotion recognition; feature selection; PCA; ICA; SVM; Kruskal-Wallis; component analysis
DOI
10.3390/s24175704
CLC Number
O65 [Analytical Chemistry]
Discipline Codes
070302; 081704
Abstract
Speech emotion recognition is key to many fields, including human-computer interaction, healthcare, and intelligent assistance. While acoustic features extracted from human speech are essential for this task, not all of them contribute effectively to emotion recognition. Thus, successful emotion recognition models require a reduced number of features. This work investigated whether splitting the features into two subsets based on their distribution, and then applying commonly used feature reduction methods, would affect accuracy. Filter reduction was employed using the Kruskal-Wallis test, followed by principal component analysis (PCA) and independent component analysis (ICA). A set of features was investigated to determine whether the indiscriminate use of parametric feature reduction techniques affects the accuracy of emotion recognition. For this investigation, data from three databases (Berlin EmoDB, SAVEE, and RAVDESS) were organized into subsets according to their distribution before applying both PCA and ICA. The results showed a reduction from 6373 features to 170 for the Berlin EmoDB database with an accuracy of 84.3%; a final size of 130 features for SAVEE, with a corresponding accuracy of 75.4%; and 150 features for RAVDESS, with an accuracy of 59.9%.
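The pipeline the abstract describes (a Kruskal-Wallis filter, a split of the surviving features by distribution, PCA on one subset and ICA on the other, and an SVM on the fused result) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the normality test (Shapiro-Wilk), the p-value thresholds, the component counts, and the synthetic data are all assumptions.

```python
# Hypothetical sketch of the split-then-reduce pipeline from the abstract.
# Thresholds, tests, and component counts are illustrative assumptions.
import numpy as np
from scipy.stats import kruskal, shapiro
from sklearn.decomposition import PCA, FastICA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 50))           # stand-in for acoustic features
y = rng.integers(0, 4, size=120)         # stand-in for emotion labels
X[:, :20] += y[:, None] * 0.8            # make some features class-dependent

# 1) Kruskal-Wallis filter: keep features whose per-class distributions differ.
keep = [j for j in range(X.shape[1])
        if kruskal(*(X[y == c, j] for c in np.unique(y))).pvalue < 0.05]
Xf = X[:, keep]

# 2) Split the kept features by distribution (Shapiro-Wilk normality, an
#    assumed choice of test).
normal = np.array([shapiro(Xf[:, j]).pvalue > 0.05 for j in range(Xf.shape[1])])
X_gauss, X_nongauss = Xf[:, normal], Xf[:, ~normal]

# 3) PCA on the approximately Gaussian subset, ICA on the rest.
parts = []
if X_gauss.shape[1] >= 2:
    parts.append(PCA(n_components=2).fit_transform(X_gauss))
if X_nongauss.shape[1] >= 2:
    parts.append(FastICA(n_components=2, random_state=0,
                         max_iter=500).fit_transform(X_nongauss))
X_red = np.hstack(parts)                 # fused, reduced representation

# 4) SVM classifier on the fused features.
scores = cross_val_score(SVC(kernel="rbf"), X_red, y, cv=5)
print(X_red.shape, scores.mean())
```

In the paper the reduction is far more aggressive (thousands of openSMILE-style features down to 130-170); the sketch only shows the order of operations: filter first, split by distribution, then apply the matching linear reduction to each subset before classification.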
Pages: 17