An Attention Pooling based Representation Learning Method for Speech Emotion Recognition

Cited by: 118
Authors
Li, Pengcheng [1]
Song, Yan [1]
McLoughlin, Ian [2]
Guo, Wu [1]
Dai, Lirong [1]
Affiliations
[1] University of Science and Technology of China, National Engineering Laboratory for Speech and Language Information Processing, Hefei, Anhui, People's Republic of China
[2] University of Kent, School of Computing, Medway, England
Source
19th Annual Conference of the International Speech Communication Association (INTERSPEECH 2018), Vols 1-6: Speech Research for Emerging Markets in Multilingual Societies | 2018
Funding
National Natural Science Foundation of China
Keywords
speech emotion recognition; high-level feature learning; convolutional neural network; second-order pooling;
DOI
10.21437/Interspeech.2018-1242
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
This paper proposes an attention pooling based representation learning method for speech emotion recognition (SER). The emotional representation is learned in an end-to-end fashion by applying a deep convolutional neural network (CNN) directly to spectrograms extracted from speech utterances. Motivated by the success of GoogLeNet, two groups of filters with different shapes are designed to capture both temporal and frequency-domain context information from the input spectrogram. The learned features are concatenated and fed into the subsequent convolutional layers. To learn the final emotional representation, a novel attention pooling method is further proposed. Compared with existing pooling methods such as max-pooling and average-pooling, the proposed attention pooling can effectively incorporate both class-agnostic bottom-up and class-specific top-down attention maps. We conduct extensive evaluations on the benchmark IEMOCAP data to assess the effectiveness of the proposed representation. Results demonstrate a recognition performance of 71.8% weighted accuracy (WA) and 68% unweighted accuracy (UA) over four emotions, outperforming the state-of-the-art method by about 3% absolute in WA and 4% in UA.
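To make the abstract's two main ideas concrete, the following is a minimal PyTorch sketch: a front end with two groups of differently shaped filters whose outputs are concatenated, and an attention pooling head that multiplies a class-agnostic bottom-up saliency map with class-specific top-down attention maps and sum-pools the result (a rank-1 form of second-order pooling). All module names, kernel shapes, and channel counts here are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn

class DualShapeFrontEnd(nn.Module):
    """Two parallel convolution groups with differently shaped kernels:
    one elongated along the time axis, one along the frequency axis.
    Their outputs are concatenated channel-wise. Kernel sizes and
    channel counts are assumptions, not the paper's settings."""

    def __init__(self, out_ch: int = 16):
        super().__init__()
        self.time_conv = nn.Conv2d(1, out_ch, kernel_size=(3, 9), padding="same")
        self.freq_conv = nn.Conv2d(1, out_ch, kernel_size=(9, 3), padding="same")

    def forward(self, spec: torch.Tensor) -> torch.Tensor:
        # spec: (batch, 1, freq, time) log-spectrogram
        return torch.cat([self.time_conv(spec), self.freq_conv(spec)], dim=1)

class AttentionPooling(nn.Module):
    """Attention pooling head: each class logit is the spatial sum of
    the product of a class-agnostic (bottom-up) saliency map and a
    class-specific (top-down) attention map, i.e. a rank-1
    approximation of second-order pooling."""

    def __init__(self, channels: int, num_classes: int = 4):
        super().__init__()
        # Bottom-up, class-agnostic saliency: a single 1x1 filter.
        self.bottom_up = nn.Conv2d(channels, 1, kernel_size=1)
        # Top-down, class-specific attention: one 1x1 filter per class.
        self.top_down = nn.Conv2d(channels, num_classes, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        saliency = self.bottom_up(feats)      # (batch, 1, freq, time)
        class_maps = self.top_down(feats)     # (batch, K, freq, time)
        # Broadcast multiply, then pool over the time-frequency plane.
        return (class_maps * saliency).flatten(2).sum(dim=2)  # (batch, K)

# Usage with hypothetical sizes: a batch of 4 spectrograms with
# 128 frequency bins and 300 frames, classified into 4 emotions.
spec = torch.randn(4, 1, 128, 300)
feats = DualShapeFrontEnd(out_ch=16)(spec)     # (4, 32, 128, 300)
logits = AttentionPooling(channels=32)(feats)  # (4, 4)

The design point worth noting is that pooling happens after the product of the two attention maps, so each class logit is a spatially weighted second-order statistic of the feature maps rather than a plain max or average.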
Pages: 3087-3091
Page count: 5