Autoencoder With Emotion Embedding for Speech Emotion Recognition

Cited by: 29
Authors
Zhang, Chenghao [1 ]
Xue, Lei [1 ]
Affiliations
[1] Shanghai Univ, Sch Commun & Informat Engn, Shanghai 200444, Peoples R China
Keywords
Feature extraction; Speech recognition; Emotion recognition; Spectrogram; Noise reduction; Hidden Markov models; Acoustics; Speech emotion recognition; autoencoder; emotion embedding; instance normalization; GENERATION;
D O I
10.1109/ACCESS.2021.3069818
Chinese Library Classification
TP [Automation and Computer Technology]
Discipline Code
0812
Abstract
Speech emotion recognition (SER) is an important part of human-computer interaction and has received growing attention in recent years. However, despite the wide variety of methods proposed for SER, performance remains limited. A key factor behind the low performance of SER systems is the difficulty of effectively extracting emotion-oriented features. In this paper, we propose a novel algorithm, an autoencoder with emotion embedding, to extract deep emotion features. Unlike many previous works, our model uses instance normalization, a technique common in the style-transfer field, rather than batch normalization. Furthermore, the emotion embedding path in our method leads the autoencoder to efficiently learn prior knowledge from the labels, enabling the model to distinguish which features are most related to human emotion. We concatenate the latent representation learned by the autoencoder with acoustic features obtained by the openSMILE toolkit, and the concatenated feature vector is then used for emotion classification. To improve the generalization of our method, a simple data augmentation approach is applied. Two publicly available and widely used databases, IEMOCAP and EMODB, are chosen to evaluate our method. Experimental results demonstrate that the proposed model achieves significant performance improvements over other speech emotion recognition systems.
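The abstract's central design choice, using instance normalization instead of batch normalization on spectrogram inputs, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; the tensor shape, function names, and epsilon value are illustrative assumptions.

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Instance normalization: each (sample, channel) feature map is
    normalized independently over its own time/frequency axes, so per-utterance
    statistics (e.g. speaker- or recording-specific offsets) are removed.
    x: array of shape (batch, channels, time, freq), e.g. a spectrogram batch.
    """
    mean = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def batch_norm(x, eps=1e-5):
    """Batch normalization (training-time sketch, no learned affine params):
    statistics are pooled across the whole batch per channel, so one
    utterance's normalization depends on the other utterances in the batch."""
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
spec = rng.normal(size=(4, 1, 64, 40))  # toy batch of 4 single-channel spectrograms
out = instance_norm(spec)
# After instance normalization, every individual spectrogram is zero-mean:
print(np.allclose(out.mean(axis=(2, 3)), 0.0, atol=1e-6))
```

The practical difference is that instance normalization makes each utterance's representation self-contained, which is why it is popular in style transfer and, per the abstract, helps isolate emotion-related variation here.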
Pages: 51231-51241
Page count: 11