A Review on Speech Emotion Recognition Using Deep Learning and Attention Mechanism

Cited by: 107
Authors
Lieskovska, Eva [1 ]
Jakubec, Maros [1 ]
Jarina, Roman [1 ]
Chmulik, Michal [1 ]
Affiliations
[1] Univ Zilina, Fac Elect Engn & Informat Technol, Univ 8215-1, Zilina 01026, Slovakia
Keywords
speech emotion recognition; deep learning; attention mechanism; recurrent neural network; long short-term memory; DATA AUGMENTATION; NEURAL-NETWORKS; FEATURES; AUDIO; CLASSIFIERS; PARAMETERS; DOMINANCE; DATABASES; AROUSAL; MODEL;
D O I
10.3390/electronics10101163
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812 ;
Abstract
Emotions are an integral part of human interactions and are significant factors in determining user satisfaction or customer opinion. Speech emotion recognition (SER) modules also play an important role in the development of human-computer interaction (HCI) applications. A tremendous number of SER systems have been developed over the last decades. Attention-based deep neural networks (DNNs) have been shown to be suitable tools for mining information that is unevenly distributed in time in multimedia content. The attention mechanism has recently been incorporated into DNN architectures to emphasise emotionally salient information as well. This paper provides a review of recent developments in SER and also examines the impact of various attention mechanisms on SER performance. An overall comparison of system accuracies is performed on the widely used IEMOCAP benchmark database.
Pages: 29
Cited References
112 in total
[71]   Modeling the Temporal Evolution of Acoustic Parameters for Speech Emotion Recognition [J].
Ntalampiras, Stavros ;
Fakotakis, Nikos .
IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2012, 3 (01) :116-125
[72]  
Nwe TL, 2003, INT CONF ACOUST SPEE, P9
[73]   A Chatbot for Psychiatric Counseling in Mental Healthcare Service Based on Emotional Dialogue Analysis and Sentence Generation [J].
Oh, Kyo-Joong ;
Lee, DongKun ;
Ko, ByungSoo ;
Choi, Ho-Jin .
2017 18TH IEEE INTERNATIONAL CONFERENCE ON MOBILE DATA MANAGEMENT (IEEE MDM 2017), 2017, :371-+
[74]  
Papakostas M, 2017, COMPUTATION, V5, DOI 10.3390/computation5020026
[75]   Jointly Predicting Arousal, Valence and Dominance with Multi-Task Learning [J].
Parthasarathy, Srinivas ;
Busso, Carlos .
18TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2017), VOLS 1-6: SITUATED INTERACTION, 2017, :1103-1107
[76]   Significance of incorporating excitation source parameters for improved emotion recognition from speech and electroglottographic signals [J].
Pravena D. ;
Govind D. .
International Journal of Speech Technology, 2017, 20 (04) :787-797
[77]  
Ringeval F., 2015, P INT WORKSH AUD VIS
[78]  
Ringeval F, 2013, IEEE INT CONF AUTOMA
[79]  
Sahu S., 2018, arXiv:1806.06626 [cs]
[80]   Emotion detection from text and speech: a survey [J].
Sailunaz K. ;
Dhaliwal M. ;
Rokne J. ;
Alhajj R. .
Social Network Analysis and Mining, 2018, 8 (01)