Improving Speech Emotion Recognition With Adversarial Data Augmentation Network

Cited by: 72
Authors
Yi, Lu [1]
Mak, Man-Wai [1]
Affiliations
[1] Hong Kong Polytech Univ, Dept Elect & Informat Engn, Hong Kong, Peoples R China
Keywords
Generators; Feature extraction; Training; Emotion recognition; Speech recognition; Generative adversarial networks; Data augmentation; generative adversarial networks (GANs); speech emotion recognition; Wasserstein divergence; NEURAL-NETWORKS; MODEL
DOI
10.1109/TNNLS.2020.3027600
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
When training data are scarce, it is challenging to train a deep neural network without severe overfitting. To overcome this challenge, this article proposes a new data augmentation network, namely, the adversarial data augmentation network (ADAN), based on generative adversarial networks (GANs). The ADAN consists of a GAN, an autoencoder, and an auxiliary classifier. These networks are trained adversarially to synthesize class-dependent feature vectors in both the latent space and the original feature space, which can then be added to the real training data for training classifiers. Instead of the conventional cross-entropy loss, the Wasserstein divergence is used for adversarial training in an attempt to produce high-quality synthetic samples. The proposed networks were applied to speech emotion recognition using EmoDB and IEMOCAP as the evaluation data sets. It was found that by forcing the synthetic latent vectors and the real latent vectors to share a common representation, the vanishing-gradient problem can be largely alleviated. The results also show that the augmented data generated by the proposed networks are rich in emotion information. Thus, the resulting emotion classifiers are competitive with state-of-the-art speech emotion recognition systems.
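To make the adversarial objective mentioned in the abstract concrete, below is a minimal PyTorch sketch of a Wasserstein-divergence critic loss of the kind the ADAN uses in place of the cross-entropy GAN loss. This is an illustrative assumption, not the authors' released code: the Critic architecture is hypothetical, and the hyperparameters k=2 and p=6 follow the original WGAN-div paper rather than this article.

import torch
import torch.nn as nn

class Critic(nn.Module):
    """Scores feature vectors; trained to separate real from synthetic samples.
    The layer sizes here are illustrative placeholders."""
    def __init__(self, feat_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 1),
        )

    def forward(self, x):
        return self.net(x)

def wdiv_critic_loss(critic, real, fake, k=2.0, p=6.0):
    """Wasserstein-divergence critic loss: E[D(fake)] - E[D(real)] plus a
    gradient penalty evaluated on points interpolated between real and
    synthetic feature vectors (one common implementation choice)."""
    eps = torch.rand(real.size(0), 1, device=real.device)
    x_hat = (eps * real + (1.0 - eps) * fake).detach().requires_grad_(True)
    grad = torch.autograd.grad(critic(x_hat).sum(), x_hat, create_graph=True)[0]
    penalty = k * grad.norm(2, dim=1).pow(p).mean()
    return critic(fake).mean() - critic(real).mean() + penalty

def wdiv_generator_loss(critic, fake):
    """Generator loss: push the critic's score of synthetic samples up."""
    return -critic(fake).mean()

In a full ADAN-style setup, the generator, autoencoder, and auxiliary classifier described in the abstract would be optimized alongside this critic; only the adversarial part of the objective is sketched here.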
Pages: 172-184 (13 pages)