Feature Pooling of Modulation Spectrum Features for Improved Speech Emotion Recognition in the Wild

Cited by: 28
Authors
Avila, Anderson R. [1 ]
Akhtar, Zahid [1 ]
Santos, Joao F. [1 ]
O'Shaughnessy, Douglas [1 ]
Falk, Tiago H. [1 ]
Affiliations
[1] INRS-EMT, Telecommunications, Montreal, QC, Canada
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC); European Union Horizon 2020;
Keywords
Affective computing; speech emotion recognition; modulation spectrum; in-the-wild; neural networks; frequency
DOI
10.1109/TAFFC.2018.2858255
CLC classification
TP18 [Theory of artificial intelligence];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Interest in affective computing is burgeoning, in great part due to its role in emerging affective human-computer interfaces (HCI). To date, the majority of existing research on automated emotion analysis has relied on data collected in controlled environments. With the rise of HCI applications on mobile devices, however, so-called "in-the-wild" settings have posed a serious threat to emotion recognition systems, particularly those based on voice. In this case, environmental factors such as ambient noise and reverberation severely hamper system performance. In this paper, we quantify the detrimental effects that the environment has on emotion recognition and explore the benefits achievable with speech enhancement. Moreover, we propose a modulation spectral feature pooling scheme that is shown to outperform a state-of-the-art benchmark system for environment-robust prediction of spontaneous arousal and valence emotional primitives. Experiments on an environment-corrupted version of the RECOLA dataset of spontaneous interactions show that the proposed feature pooling scheme, combined with speech enhancement, outperforms the benchmark across noise-only, reverberation-only, and noise-plus-reverberation conditions. Additional tests with the SEWA database confirm the benefits of the proposed method for in-the-wild applications.
Pages: 177-188
Page count: 12