Multi-Task Semi-Supervised Adversarial Autoencoding for Speech Emotion Recognition

Cited by: 59
Authors
Latif, Siddique [1 ,2 ]
Rana, Rajib [1 ]
Khalifa, Sara [2 ,3 ,4 ]
Jurdak, Raja [5 ,6 ]
Epps, Julien [7 ]
Schuller, Bjoern W. [8 ,9 ]
Affiliations
[1] Univ Southerns Queensland USQ, Springfield, Qld 4300, Australia
[2] CSIRO, Data61, Distributed Sensing Syst Grp, Pullenvale, Qld 4069, Australia
[3] Univ New South Wales, Sydney, NSW 2052, Australia
[4] Univ Queensland UQ, St Lucia, Qld 4072, Australia
[5] Queensland Univ Technol QUT, Brisbane, Qld 4000, Australia
[6] CSIROs Data61, Pullenvale, Qld 4068, Australia
[7] Univ New South Wales UNSW, Sydney, NSW 2052, Australia
[8] Imperial Coll London, GLAM Grp Language Audio & Mus, London SW7 2AZ, England
[9] Univ Augsburg, Chair Embedded Intelligence Hlth Care & Wellbeing, D-86159 Augsburg, Germany
Keywords
Task analysis; Emotion recognition; Speech recognition; Hidden Markov models; Semi-supervised learning; Training; Australia; Speech emotion recognition; multi-task learning; representation learning; CATEGORICAL EMOTIONS; IMPROVING SPEECH; CLASSIFICATION; CORPUS; FRAMEWORK; NETWORKS; FEATURES; MODEL
DOI
10.1109/TAFFC.2020.2983669
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Despite the emerging importance of Speech Emotion Recognition (SER), state-of-the-art accuracy remains quite low and needs improvement to make commercial applications of SER viable. A key underlying reason for the low accuracy is the scarcity of emotion datasets, which is a challenge for developing any robust machine learning model in general. In this article, we propose a solution to this problem: a multi-task learning framework that uses auxiliary tasks for which data is abundantly available. We show that utilisation of this additional data can improve the primary task of SER, for which only limited labelled data is available. In particular, we use gender identification and speaker recognition as auxiliary tasks, which allow the use of very large datasets, e.g., speaker classification datasets. To maximise the benefit of multi-task learning, we further use an adversarial autoencoder (AAE) within our framework, which has a strong capability to learn powerful and discriminative features. Furthermore, the unsupervised AAE in combination with the supervised classification networks enables semi-supervised learning, which incorporates a discriminative component into the AAE's unsupervised training pipeline. This semi-supervised learning essentially helps to improve the generalisation of our framework and thus leads to improvements in SER performance. The proposed model is rigorously evaluated for categorical and dimensional emotions and in cross-corpus scenarios. Experimental results demonstrate that the proposed model achieves state-of-the-art performance on two publicly available datasets.
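The multi-task objective described in the abstract (a primary emotion classifier plus auxiliary gender and speaker classifiers, trained jointly with the autoencoder's reconstruction term) can be sketched as a weighted sum of per-task losses. This is a minimal illustration only; the function names and the weights `w_*` below are assumptions for the sketch, not values or APIs taken from the paper:

```python
import math

def cross_entropy(probs, label):
    """Negative log-likelihood of the true class (probs is a list of
    predicted class probabilities, label is the true class index)."""
    return -math.log(probs[label])

def multi_task_loss(recon_err,
                    emo_probs, emo_label,
                    gen_probs, gen_label,
                    spk_probs, spk_label,
                    w_emo=1.0, w_gen=0.5, w_spk=0.5, w_rec=1.0):
    """Weighted sum of the primary (emotion) loss, the auxiliary
    (gender, speaker) losses, and the autoencoder reconstruction term.
    The task weights here are illustrative, not the paper's settings."""
    return (w_emo * cross_entropy(emo_probs, emo_label)
            + w_gen * cross_entropy(gen_probs, gen_label)
            + w_spk * cross_entropy(spk_probs, spk_label)
            + w_rec * recon_err)
```

Setting the auxiliary weights to zero reduces the objective to ordinary single-task training, which is one simple way to ablate the contribution of the auxiliary tasks.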
Pages: 992-1004
Page count: 13