Adversarial Auto-encoders for Speech Based Emotion Recognition

Cited by: 34
Authors
Sahu, Saurabh [1 ]
Gupta, Rahul [2 ]
Sivaraman, Ganesh [1 ]
AbdAlmageed, Wael [3 ,4 ]
Espy-Wilson, Carol [1 ]
Affiliations
[1] Univ Maryland, Speech Commun Lab, College Pk, MD 20742 USA
[2] Amazon Com, Seattle, WA USA
[3] Voice Vibes, Marriottsville, MD USA
[4] USC, Informat Sci Inst, Los Angeles, CA USA
Source
18TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2017), VOLS 1-6: SITUATED INTERACTION | 2017
Keywords
Adversarial auto-encoders; speech based emotion recognition; FEATURES;
DOI
10.21437/Interspeech.2017-1421
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recently, generative adversarial networks and adversarial auto-encoders have gained considerable attention in the machine learning community due to their exceptional performance on tasks such as digit classification and face recognition. They map the auto-encoder's bottleneck-layer output (termed code vectors) to a chosen noise probability density function (PDF), which can be further regularized to cluster based on class information. In addition, they allow the generation of synthetic samples by sampling code vectors from the mapped PDF. Inspired by these properties, we investigate the application of adversarial auto-encoders to the domain of emotion recognition. Specifically, we conduct experiments on the following two aspects: (i) their ability to encode high-dimensional feature vector representations of emotional utterances into a compressed space, with minimal loss of emotion-class discriminability in that space, and (ii) their ability to regenerate synthetic samples in the original feature space, to be used later for purposes such as training emotion recognition classifiers. We demonstrate the promise of adversarial auto-encoders with regard to these aspects on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) corpus and present our analysis.
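The two uses of an adversarial auto-encoder described in the abstract, compressing utterance-level features into low-dimensional code vectors and sampling the matched prior to decode synthetic feature vectors, can be sketched schematically. The linear encoder/decoder maps, feature dimension, and code dimension below are illustrative stand-ins, not the paper's trained model; real AAE training adds reconstruction and adversarial losses that push the code distribution toward the prior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: a high-dimensional utterance feature vector
# compressed to a 2-d code (values are assumptions, not the paper's).
feat_dim, code_dim, n_utts = 1582, 2, 100

# Stand-in "trained" encoder/decoder weights. In an AAE these would come
# from joint reconstruction + adversarial training, not random init.
W_enc = rng.normal(0.0, 0.01, (feat_dim, code_dim))
W_dec = rng.normal(0.0, 0.01, (code_dim, feat_dim))

X = rng.standard_normal((n_utts, feat_dim))  # utterance-level features

# (i) Compression: map high-dimensional features to code vectors.
codes = X @ W_enc                            # shape (n_utts, code_dim)

# Adversarial regularization (not shown) matches the code distribution
# to a chosen prior, e.g. a standard Gaussian, so the prior can be
# sampled directly at generation time.
z = rng.standard_normal((50, code_dim))

# (ii) Generation: decode prior samples into synthetic feature vectors
# that could augment training data for an emotion classifier.
X_synth = z @ W_dec                          # shape (50, feat_dim)

print(codes.shape, X_synth.shape)
```

The key design point the sketch highlights is that generation needs no encoder at test time: because training aligns the code PDF with a known prior, synthetic samples come from prior draws pushed through the decoder alone.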
Pages: 1243-1247
Page count: 5