MULTI-CONDITIONING AND DATA AUGMENTATION USING GENERATIVE NOISE MODEL FOR SPEECH EMOTION RECOGNITION IN NOISY CONDITIONS

Cited by: 0
Authors
Tiwari, Upasana [1 ]
Soni, Meet [1 ]
Chakraborty, Rupayan [1 ]
Panda, Ashish [1 ]
Kopparapu, Sunil Kumar [1 ]
Affiliations
[1] TCS Research and Innovation, Mumbai, Maharashtra, India
Source
2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING | 2020
Keywords
Speech emotion recognition; noise robustness; generative noise model; multi-conditioning; deep neural network
DOI
10.1109/icassp40776.2020.9053581
CLC classification
O42 [Acoustics];
Discipline codes
070206; 082403
Abstract
Degradation due to additive noise is a significant roadblock to the real-life deployment of Speech Emotion Recognition (SER) systems. Most previous work in this field has dealt with noise degradation either at the signal level or at the feature level. In this paper, to address the robustness of SER in additive-noise scenarios, we propose multi-conditioning and data augmentation using an utterance-level parametric generative noise model. The generative noise model is designed to generate noise types that can span the entire noise space in the mel-filterbank energy domain, which renders the system robust against unseen noise conditions. The generated noise types can be used to create multi-conditioned data for training SER systems. The multi-conditioning approach can also be used to increase the amount of training data many-fold where such data is limited. We report the performance of the proposed method on two datasets, namely EmoDB and IEMOCAP. We also explore multi-conditioning and data augmentation using noise samples from the NOISEX-92 database.
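The abstract does not give implementation details, so the following is only a minimal Python sketch of what utterance-level multi-conditioning in the mel-filterbank energy domain could look like. The noise generator here (sample_noise_profile, its gamma-distributed fluctuations, and all parameter ranges) is a hypothetical stand-in, not the parametric model proposed in the paper.

import numpy as np

def sample_noise_profile(n_mels, rng, low=0.01, high=2.0):
    # Random per-band mean noise energy (linear domain), lightly smoothed so
    # the profile resembles a broadband noise spectrum; the (low, high) range
    # is a hypothetical parameter, not taken from the paper.
    levels = rng.uniform(low, high, size=n_mels)
    kernel = np.ones(5) / 5.0
    return np.convolve(levels, kernel, mode="same")

def multi_condition(clean_logmel, rng, n_copies=4):
    # Make noisy copies of one utterance for multi-conditioned training by
    # adding synthetic noise energy to the clean energy in the linear domain,
    # then returning to the log domain.
    T, n_mels = clean_logmel.shape
    copies = [clean_logmel]                     # keep the clean version as well
    clean_lin = np.exp(clean_logmel)
    for _ in range(n_copies):
        profile = sample_noise_profile(n_mels, rng)
        # Gamma-distributed fluctuations around each band's mean noise level.
        noise_lin = profile[None, :] * rng.gamma(1.0, 1.0, size=(T, n_mels))
        copies.append(np.log(clean_lin + noise_lin))
    return copies

rng = np.random.default_rng(0)
utterance = rng.normal(size=(200, 40))          # stand-in for real log-mel features
augmented = multi_condition(utterance, rng)
print(len(augmented), augmented[0].shape)       # -> 5 (200, 40)

Summing the clean and synthetic noise energies in the linear domain before taking the log mirrors how additive acoustic noise combines with speech in filterbank energies, which is why augmentation of this kind is typically done there rather than directly on log features.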
Pages: 7194 - 7198
Number of pages: 5
Related papers
18 in total
  • [1] Robust emotional speech recognition based on binaural model and emotional auditory mask in noisy environments
    Bashirpour, Meysam
    Geravanchizadeh, Masoud
[J]. EURASIP JOURNAL ON AUDIO SPEECH AND MUSIC PROCESSING, 2018
• [2] Burkhardt F., 2005, Proc. 9th European Conference on Speech Communication and Technology (Interspeech 2005), Vol. 5, p. 1517, DOI 10.21437/Interspeech.2005-446
  • [3] IEMOCAP: interactive emotional dyadic motion capture database
    Busso, Carlos
    Bulut, Murtaza
    Lee, Chi-Chun
    Kazemzadeh, Abe
    Mower, Emily
    Kim, Samuel
    Chang, Jeannette N.
    Lee, Sungbok
    Narayanan, Shrikanth S.
    [J]. LANGUAGE RESOURCES AND EVALUATION, 2008, 42 (04) : 335 - 359
  • [4] Front-end Feature Compensation and Denoising for Noise Robust Speech Emotion Recognition
    Chakraborty, Rupayan
    Panda, Ashish
    Pandharipande, Meghna
    Joshi, Sonal
    Kopparapu, Sunil Kumar
    [J]. INTERSPEECH 2019, 2019, : 3257 - 3261
  • [5] Data Augmentation using GANs for Speech Emotion Recognition
    Chatziagapi, Aggelina
    Paraskevopoulos, Georgios
    Sgouropoulos, Dimitris
    Pantazopoulos, Georgios
    Nikandrou, Malvina
    Giannakopoulos, Theodoros
    Katsamanis, Athanasios
    Potamianos, Alexandros
    Narayanan, Shrikanth
    [J]. INTERSPEECH 2019, 2019, : 171 - 175
• [6] Georgogiannis A., 2012, European Signal Processing Conference (EUSIPCO), p. 2045
• [7] Heracleous P., 2017, International Conference on Affective Computing and Intelligent Interaction (ACII), p. 262, DOI 10.1109/ACII.2017.8273610
  • [8] Speech Emotion Recognition under White Noise
    Huang, Chengwei
    Chen, Guoming
    Yu, Hua
    Bao, Yongqiang
    Zhao, Li
    [J]. ARCHIVES OF ACOUSTICS, 2013, 38 (04) : 457 - 463
• [9] Joshi S., 2019, Text, Speech, and Dialogue (TSD)
  • [10] Improving Noise Robustness of Speech Emotion Recognition System
    Juszkiewicz, Lukasz
    [J]. INTELLIGENT DISTRIBUTED COMPUTING VII, 2014, 511 : 223 - 232