Research on speech emotion recognition algorithm for unbalanced data set

Cited by: 0
Authors
Liang Z. [1 ]
Li X. [1 ]
Song W. [1 ]
Affiliations
[1] Electronic Information Engineering, Changchun University of Science and Technology, Jilin Province
Keywords
CRNN; focal loss; spectrograms; speech emotion recognition
DOI
10.3233/JIFS-191129
Abstract
In speech emotion recognition, most emotional corpora suffer from problems such as inconsistent sample lengths and imbalanced sample categories. To address these problems, this paper proposes a variable-length-input CRNN deep learning model based on focal loss for recognizing anger, happiness, neutrality and sadness in the IEMOCAP emotional corpus. First, a variable-length strategy is introduced and the spectrograms of the padded speech samples are fed into the CNN. Second, the effective part of each input sequence is preserved and passed on by means of a masking matrix and the convolutional layers. Third, the effective output of the input sequence is fed into a BiGRU network for learning. Finally, focal loss is used for network training to control and adjust the contribution of the different sample classes to the total loss. Simulations show that, compared with traditional speech emotion recognition models, our method effectively improves the accuracy and overall performance of emotion recognition. © 2020 - IOS Press and the authors. All rights reserved.
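The abstract's key idea for handling class imbalance is focal loss, which down-weights easy, well-classified samples so that hard (often minority-class) samples dominate the total loss. A minimal NumPy sketch of the standard formulation, FL(p_t) = -alpha * (1 - p_t)^gamma * log(p_t), is shown below; the `alpha` and `gamma` values are illustrative defaults, not taken from the paper.

```python
import numpy as np

def focal_loss(probs, labels, alpha=0.25, gamma=2.0):
    """Mean focal loss over a batch (hypothetical helper, not the paper's code).

    probs  : (N, C) softmax outputs
    labels : (N,) integer class ids
    """
    # Probability assigned to the true class of each sample.
    p_t = probs[np.arange(len(labels)), labels]
    # (1 - p_t)^gamma shrinks the loss of confident (easy) predictions,
    # so imbalanced, hard samples contribute relatively more.
    loss = -alpha * (1.0 - p_t) ** gamma * np.log(p_t)
    return loss.mean()

# With gamma = 0 and alpha = 1 this reduces to plain cross-entropy;
# with the defaults, a confident sample (p_t = 0.9) is down-weighted
# far more strongly than an uncertain one (p_t = 0.4).
probs = np.array([[0.9, 0.1], [0.4, 0.6]])
labels = np.array([0, 0])
```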
Pages: 2791-2796
Number of pages: 5
Related papers
50 records in total
[31]   Improving Speech Emotion Recognition With Adversarial Data Augmentation Network [J].
Yi, Lu ;
Mak, Man-Wai .
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (01) :172-184
[32]   A Data Augmentation Approach for Improving the Performance of Speech Emotion Recognition [J].
Paraskevopoulou, Georgia ;
Spyrou, Evaggelos ;
Perantonis, Stavros .
SIGMAP: PROCEEDINGS OF THE 19TH INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING AND MULTIMEDIA APPLICATIONS, 2022, :61-69
[33]   Analyzing the influence of different speech data corpora and speech features on speech emotion recognition: A review [J].
Rathi, Tarun ;
Tripathy, Manoj .
SPEECH COMMUNICATION, 2024, 162
[34]   Speech emotion recognition based on an improved supervised manifold learning algorithm [J].
Zhang S.-Q. ;
Li L.-M. ;
Zhao Z.-J. .
Dianzi Yu Xinxi Xuebao/Journal of Electronics and Information Technology, 2010, 32 (11) :2724-2729
[35]   Discriminative Feature Learning for Speech Emotion Recognition [J].
Zhang, Yuying ;
Zou, Yuexian ;
Peng, Junyi ;
Luo, Danqing ;
Huang, Dongyan .
ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2019: TEXT AND TIME SERIES, PT IV, 2019, 11730 :198-210
[36]   Gender-Driven English Speech Emotion Recognition with Genetic Algorithm [J].
Yue, Liya ;
Hu, Pei ;
Zhu, Jiulong .
BIOMIMETICS, 2024, 9 (06)
[37]   CycleGAN-based Emotion Style Transfer as Data Augmentation for Speech Emotion Recognition [J].
Bao, Fang ;
Neumann, Michael ;
Ngoc Thang Vu .
INTERSPEECH 2019, 2019, :2828-2832
[38]   Speech emotion recognition based on emotion perception [J].
Liu, Gang ;
Cai, Shifang ;
Wang, Ce .
EURASIP JOURNAL ON AUDIO SPEECH AND MUSIC PROCESSING, 2023, 2023 (01)
[39]   Autoencoder With Emotion Embedding for Speech Emotion Recognition [J].
Zhang, Chenghao ;
Xue, Lei .
IEEE ACCESS, 2021, 9 :51231-51241