The Emotion Probe: On the Universality of Cross-Linguistic and Cross-Gender Speech Emotion Recognition via Machine Learning

Cited by: 20
Authors
Costantini, Giovanni [1 ]
Parada-Cabaleiro, Emilia [2 ]
Casali, Daniele [1 ]
Cesarini, Valerio [1 ]
Affiliations
[1] Univ Roma Tor Vergata, Dept Elect Engn, I-00133 Rome, Italy
[2] Johannes Kepler Univ Linz, Inst Computat Percept, A-4040 Linz, Austria
Keywords
speech; emotion recognition; artificial intelligence; English; cross-linguistic; cross-gender; SVM; machine learning; SER; mood states; perception
DOI
10.3390/s22072461
CLC classification
O65 [Analytical Chemistry]
Discipline codes
070302; 081704
Abstract
Machine Learning (ML) algorithms within a human-computer framework are the leading force in speech emotion recognition (SER). However, few studies explore cross-corpora aspects of SER; this work examines the feasibility and characteristics of cross-linguistic, cross-gender SER. Three ML classifiers (SVM, Naïve Bayes, and MLP) are applied to acoustic features obtained through a procedure based on Kononenko's discretization and correlation-based feature selection. The system encompasses five emotions (disgust, fear, happiness, anger, and sadness) and uses the EmoFilm database, comprising short clips from English-language movies and their Italian and Spanish dubbed versions, for a total of 1115 annotated utterances. MLP proves the most effective classifier, reaching accuracies above 90% for single-language approaches, while the cross-language classifier still yields accuracies above 80%. Cross-gender tasks prove more difficult than those involving two languages, suggesting greater differences between emotions expressed by male versus female speakers than between languages. Four feature domains, namely RASTA, F0, MFCC, and spectral energy, are algorithmically assessed as the most effective, refining existing literature and approaches based on standard feature sets. To our knowledge, this is one of the first studies encompassing both cross-gender and cross-linguistic assessments of SER.
Pages: 17
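The abstract outlines a conventional supervised pipeline: acoustic features reduced by a discretization-plus-correlation-based selection step, then fed to SVM, Naïve Bayes, and MLP classifiers for comparison. A minimal Python sketch of that kind of comparison follows; it is not the authors' code. The feature matrix X and label vector y are placeholders, and scikit-learn's mutual-information ranking stands in for Kononenko's discretization with correlation-based feature selection, which has no off-the-shelf scikit-learn equivalent.

# Minimal sketch (not the authors' implementation): comparing the three
# classifier families named in the abstract on a precomputed acoustic-feature
# matrix. X is assumed to hold per-utterance features (e.g. MFCC, F0, RASTA,
# spectral-energy statistics); y holds the five emotion labels.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1115, 384))     # placeholder feature matrix (1115 utterances)
y = rng.integers(0, 5, size=1115)    # placeholder labels: 5 emotion classes

classifiers = {
    "SVM": SVC(kernel="rbf", C=1.0),
    "NaiveBayes": GaussianNB(),
    "MLP": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500),
}

for name, clf in classifiers.items():
    pipe = Pipeline([
        ("scale", StandardScaler()),
        # Stand-in for the paper's Kononenko discretization + CFS step.
        ("select", SelectKBest(mutual_info_classif, k=60)),
        ("clf", clf),
    ])
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")

For the cross-linguistic and cross-gender experiments described in the abstract, the cross-validation call would be replaced by fitting the pipeline on one language (or gender) partition and scoring it on another.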