A STUDY ON CROSS-CORPUS SPEECH EMOTION RECOGNITION AND DATA AUGMENTATION

Cited by: 7
Authors
Braunschweiler, Norbert [1]
Doddipatla, Rama [1]
Keizer, Simon [1]
Stoyanchev, Svetlana [1]
Affiliations
[1] Toshiba Europe Ltd., Cambridge Research Laboratory, Cambridge CB4 0GZ, England
Source
2021 IEEE AUTOMATIC SPEECH RECOGNITION AND UNDERSTANDING WORKSHOP (ASRU) | 2021
Keywords
speech emotion recognition; cross-corpus; data augmentation; CNN-RNN bi-directional LSTM; deep learning
DOI
10.1109/ASRU51503.2021.9687987
Chinese Library Classification (CLC) Code
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Models that can handle a wide range of speakers and acoustic conditions are essential in speech emotion recognition (SER). However, such models often show mixed results when presented with speakers or acoustic conditions that were not seen during training. This paper investigates the impact of cross-corpus data complementation and data augmentation on the performance of SER models in matched (test set from the same corpus) and mismatched (test set from a different corpus) conditions. Investigations are presented using six emotional speech corpora that include single and multiple speakers as well as variations in emotion style (acted, elicited, natural) and recording conditions. Observations show that, as expected, models trained on a single corpus perform best in matched conditions, while performance decreases by 10-40% in mismatched conditions, depending on corpus-specific features. Models trained on mixed corpora can be more stable in mismatched contexts, with performance reductions of only 1-8% compared with single-corpus models in matched conditions. Data augmentation yields additional gains of up to 4% and seems to benefit mismatched conditions more than matched ones.
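As an illustration of the model family named in the keywords (a CNN-RNN with a bi-directional LSTM), the sketch below builds a small convolutional plus bi-directional LSTM emotion classifier over log-Mel features and adds a simple waveform-level augmentation helper. This is a minimal sketch: the four-class label set, the log-Mel input representation, all layer sizes and the additive-noise augmentation are illustrative assumptions, not the configuration reported in the paper.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_bilstm(num_classes: int = 4, num_mels: int = 80) -> tf.keras.Model:
    """Convolutional front-end over log-Mel frames followed by a bi-directional LSTM."""
    # Input: a log-Mel spectrogram with a variable number of time frames.
    inputs = layers.Input(shape=(None, num_mels))
    x = layers.Conv1D(64, kernel_size=5, padding="same", activation="relu")(inputs)
    x = layers.Conv1D(64, kernel_size=5, padding="same", activation="relu")(x)
    # The bi-directional LSTM summarises the whole utterance into a single vector.
    x = layers.Bidirectional(layers.LSTM(128))(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(inputs, outputs)

def augment_waveform(wav: np.ndarray, noise_std: float = 0.005) -> np.ndarray:
    """Illustrative augmentation: add low-level Gaussian noise to a float waveform."""
    return wav + noise_std * np.random.randn(*wav.shape)

model = build_cnn_bilstm()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

Cross-corpus evaluation in the sense of the abstract then amounts to training such a model on one corpus, or on a pooled mix of corpora optionally extended with augmented copies, and testing it both on held-out data from the same corpus (matched) and on an entirely different corpus (mismatched).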
Pages: 24-30
Page count: 7