End-to-End Multimodal Emotion Recognition Using Deep Neural Networks

Cited by: 414
Authors
Tzirakis, Panagiotis [1 ]
Trigeorgis, George [1 ]
Nicolaou, Mihalis A. [2 ]
Schuller, Bjorn W. [3 ,4 ]
Zafeiriou, Stefanos [1 ,5 ]
Affiliations
[1] Imperial Coll London, Dept Comp, London SW7 2AZ, England
[2] Goldsmiths Univ London, Dept Comp, London SE14 6NW, England
[3] Imperial Coll London, Grp Language Audio & Mus, London SW7 2AZ, England
[4] Univ Augsburg, Embedded Intelligence Hlth Care and Wellbeing, D-86159 Augsburg, Germany
[5] Univ Oulu, Ctr Machine Vis & Signal Anal, Oulu 90014, Finland
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
End-to-end learning; emotion recognition; deep learning; affective computing; speech; features;
DOI
10.1109/JSTSP.2017.2764438
CLC Classification Number
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Subject Classification Codes
0808; 0809;
Abstract
Automatic affect recognition is a challenging task because emotions can be expressed through various modalities. Applications can be found in many domains, including multimedia retrieval and human-computer interaction. In recent years, deep neural networks have been used with great success in determining emotional states. Inspired by this success, we propose an emotion recognition system using auditory and visual modalities. To capture the emotional content across various speaking styles, robust features need to be extracted. To this end, we utilize a convolutional neural network (CNN) to extract features from the speech signal, while for the visual modality a deep residual network of 50 layers is used. Beyond feature extraction, the learning algorithm also needs to be insensitive to outliers while being able to model temporal context. To tackle this problem, long short-term memory (LSTM) networks are utilized. The system is then trained in an end-to-end fashion where, by also taking advantage of the correlations between the streams, we manage to significantly outperform, in terms of concordance correlation coefficient, traditional approaches based on handcrafted auditory and visual features for the prediction of spontaneous and natural emotions on the RECOLA database of the AVEC 2016 research challenge on emotion recognition.
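The abstract names the building blocks (a CNN on speech, a 50-layer residual network on faces, an LSTM for temporal context) but not the exact configuration. The following is a minimal sketch, assuming PyTorch, of how such a pipeline could be wired together; the class name AudioVisualEmotionNet, the audio-branch kernel sizes, and the hidden dimension are illustrative assumptions, not the paper's published hyperparameters.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class AudioVisualEmotionNet(nn.Module):
    """Hypothetical sketch: 1-D CNN on raw speech + ResNet-50 on face frames,
    fused per time step and fed to an LSTM that predicts arousal/valence."""

    def __init__(self, hidden: int = 256):
        super().__init__()
        # Audio branch: small 1-D CNN over raw waveform chunks
        # (the paper describes a CNN extracting features from speech).
        self.audio_cnn = nn.Sequential(
            nn.Conv1d(1, 40, kernel_size=20, stride=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(40, 40, kernel_size=40, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Visual branch: ResNet-50 trunk without the classification head
        # (the paper describes a 50-layer deep residual network).
        resnet = models.resnet50(weights=None)
        self.visual_cnn = nn.Sequential(*list(resnet.children())[:-1])
        # Temporal model: LSTM over the concatenated per-frame features.
        self.lstm = nn.LSTM(40 + 2048, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # continuous arousal and valence

    def forward(self, audio: torch.Tensor, frames: torch.Tensor) -> torch.Tensor:
        # audio: (B, T, samples_per_frame); frames: (B, T, 3, H, W)
        B, T = audio.shape[:2]
        a = self.audio_cnn(audio.reshape(B * T, 1, -1)).squeeze(-1)    # (B*T, 40)
        v = self.visual_cnn(frames.reshape(B * T, *frames.shape[2:]))  # (B*T, 2048, 1, 1)
        fused = torch.cat([a, v.flatten(1)], dim=1).reshape(B, T, -1)
        out, _ = self.lstm(fused)
        return self.head(out)  # (B, T, 2): one arousal/valence pair per time step
```

Because both branches and the LSTM sit in one module, a single optimizer step backpropagates through the whole stack, which is what "trained in an end-to-end fashion" refers to here.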
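The evaluation measure, the concordance correlation coefficient, has the closed form CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2). A hedged sketch of 1 - CCC as a differentiable training objective follows; using it directly as the loss is an assumption made for illustration, consistent with the abstract's choice of metric.

```python
import torch

def ccc_loss(pred: torch.Tensor, gold: torch.Tensor) -> torch.Tensor:
    """1 - CCC over a sequence of predictions and gold annotations."""
    pred_mean, gold_mean = pred.mean(), gold.mean()
    covariance = ((pred - pred_mean) * (gold - gold_mean)).mean()
    ccc = (2 * covariance) / (
        pred.var(unbiased=False) + gold.var(unbiased=False)
        + (pred_mean - gold_mean) ** 2
    )
    return 1.0 - ccc
```

Unlike mean squared error, this objective penalizes both scale and location mismatch between the predicted and annotated traces, so optimizing it aligns training with the challenge's evaluation measure.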
Pages: 1301-1309
Page count: 9