Improving Speech Emotion Recognition With Adversarial Data Augmentation Network

Cited by: 67
Authors
Yi, Lu [1 ]
Mak, Man-Wai [1 ]
Affiliations
[1] The Hong Kong Polytechnic University, Department of Electronic and Information Engineering, Hong Kong, People's Republic of China
Keywords
Generators; Feature extraction; Training; Emotion recognition; Speech recognition; Generative adversarial networks (GANs); Data augmentation; Speech emotion recognition; Wasserstein divergence; Neural networks; Model
DOI
10.1109/TNNLS.2020.3027600
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
When training data are scarce, it is challenging to train a deep neural network without overfitting. To overcome this challenge, this article proposes a new data augmentation network, namely the adversarial data augmentation network (ADAN), based on generative adversarial networks (GANs). The ADAN consists of a GAN, an autoencoder, and an auxiliary classifier. These networks are trained adversarially to synthesize class-dependent feature vectors in both the latent space and the original feature space, which can then be added to the real training data when training classifiers. Instead of the conventional cross-entropy loss, the Wasserstein divergence is used for adversarial training in an attempt to produce high-quality synthetic samples. The proposed networks were applied to speech emotion recognition, with EmoDB and IEMOCAP as the evaluation data sets. It was found that forcing the synthetic latent vectors and the real latent vectors to share a common representation largely alleviates the gradient-vanishing problem. The results also show that the augmented data generated by the proposed networks are rich in emotion information, so the resulting emotion classifiers are competitive with state-of-the-art speech emotion recognition systems.
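The abstract's central technical ingredients are (a) a class-conditional generator that synthesizes emotion-dependent feature vectors and (b) the Wasserstein divergence (WGAN-div) objective used in place of a cross-entropy GAN loss. The PyTorch sketch below is a minimal illustration of those two pieces only; the network sizes, feature and noise dimensions, number of emotion classes, and the WGAN-div hyperparameters k and p are illustrative assumptions, not the authors' published ADAN configuration.

```python
import torch
import torch.nn as nn

FEAT_DIM, NOISE_DIM, NUM_CLASSES = 384, 100, 4   # assumed sizes, not from the paper

class Generator(nn.Module):
    """Synthesizes a feature vector from noise plus an emotion label."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_CLASSES, NOISE_DIM)
        self.net = nn.Sequential(
            nn.Linear(2 * NOISE_DIM, 256), nn.ReLU(),
            nn.Linear(256, FEAT_DIM))

    def forward(self, z, y):
        # Conditioning on the label makes the samples class-dependent.
        return self.net(torch.cat([z, self.embed(y)], dim=1))

gen = Generator()
critic = nn.Sequential(nn.Linear(FEAT_DIM, 256), nn.LeakyReLU(0.2),
                       nn.Linear(256, 1))

def critic_loss(real, fake, k=2.0, p=6.0):
    """WGAN-div critic loss: Wasserstein term + k * E[||grad D(x_hat)||^p]."""
    mix = torch.rand(real.size(0), 1)
    x_hat = (mix * real + (1 - mix) * fake.detach()).requires_grad_(True)
    grad = torch.autograd.grad(critic(x_hat).sum(), x_hat,
                               create_graph=True)[0]
    div = k * grad.norm(2, dim=1).pow(p).mean()
    return critic(fake.detach()).mean() - critic(real).mean() + div

def generator_loss(fake):
    # The generator tries to raise the critic's score on synthetic samples.
    return -critic(fake).mean()

# Augmentation step: pool synthetic and real features to train a classifier.
real = torch.randn(32, FEAT_DIM)                # stand-in for real acoustic features
y = torch.randint(0, NUM_CLASSES, (32,))
fake = gen(torch.randn(32, NOISE_DIM), y)
print(critic_loss(real, fake).item(), generator_loss(fake).item())
augmented = torch.cat([real, fake.detach()], dim=0)  # fed to the emotion classifier
```

In the full ADAN, an autoencoder maps real features into the latent space shared with the generator (which is what the abstract credits for alleviating vanishing gradients), and an auxiliary classifier keeps the synthetic samples emotion-discriminative; both are omitted from this sketch for brevity.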
Pages: 172-184
Number of pages: 13