Feature fusion methods research based on deep belief networks for speech emotion recognition under noise condition

Cited by: 59
Authors
Huang, Yongming [1 ,2 ]
Tian, Kexin [1 ,2 ]
Wu, Ao [1 ,2 ]
Zhang, Guobao [1 ,2 ]
Affiliations
[1] Southeast Univ, Lab Measurement & Control Complex Syst Engn, Nanjing, Jiangsu, Peoples R China
[2] Southeast Univ, Sch Automat, Minist Educ, Nanjing 210096, Jiangsu, Peoples R China
Keywords
Speech emotion recognition; Weighted wavelet packet cepstral coefficients (W-WPCC); Feature fusion; Deep belief networks (DBNs); CHINESE SPEECH; SVM; CLASSIFICATION
DOI
10.1007/s12652-017-0644-8
CLC number
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
The accuracy of speech emotion recognition based on prosody and voice quality features declines as the signal-to-noise ratio (SNR) of the speech signal decreases. In this paper, we propose novel sub-band spectral centroid weighted wavelet packet cepstral coefficients (W-WPCC) for robust speech emotion recognition. The W-WPCC feature is computed by combining sub-band energies with sub-band spectral centroids via a weighting scheme, yielding noise-robust acoustic features. Deep belief networks (DBNs) are artificial neural networks with more than one hidden layer, which are first pre-trained layer by layer and then fine-tuned using the backpropagation algorithm. A well-trained deep network can model complex, non-linear structure in the training data and better predict the probability distribution over classification labels. We extract prosody features, voice quality features, and wavelet packet cepstral coefficients (WPCC) from the speech signals, combine them with W-WPCC, and fuse them with DBNs. Experimental results on the Berlin emotional speech database show that the proposed fused feature with W-WPCC is better suited to speech emotion recognition under noisy conditions than other acoustic features, and that the proposed DBN feature-learning structure combined with W-WPCC improves emotion recognition performance over the conventional emotion recognition method.
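
To make the W-WPCC description above concrete, here is a minimal Python sketch for a single speech frame. The abstract only states that sub-band energies are combined with sub-band spectral centroids via a weighting scheme; the convex combination controlled by `lam`, the db4 wavelet, the log-DCT cepstral step, and all parameter defaults below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
import pywt                      # wavelet packet decomposition
from scipy.fft import dct        # DCT for the cepstral step

def w_wpcc(frame, fs=16000, level=4, lam=0.5, n_ceps=13, wavelet='db4'):
    """Hypothetical W-WPCC for one speech frame (weighting scheme assumed)."""
    wp = pywt.WaveletPacket(data=frame, wavelet=wavelet,
                            mode='symmetric', maxlevel=level)
    nodes = wp.get_level(level, order='freq')        # 2**level sub-bands
    energies, centroids = [], []
    for node in nodes:
        spec = np.abs(np.fft.rfft(node.data)) ** 2   # sub-band power spectrum
        freqs = np.fft.rfftfreq(len(node.data), d=1.0 / fs)
        energies.append(spec.sum())
        # sub-band spectral centroid: energy-weighted mean frequency
        centroids.append((freqs * spec).sum() / (spec.sum() + 1e-12))
    energies = np.asarray(energies)
    centroids = np.asarray(centroids)
    centroids /= centroids.max() + 1e-12             # normalise to [0, 1]
    # assumed weighting: convex combination of raw and centroid-weighted energy
    weighted = (1.0 - lam) * energies + lam * centroids * energies
    return dct(np.log(weighted + 1e-12), norm='ortho')[:n_ceps]

# usage on a synthetic 32 ms frame at 16 kHz
frame = np.random.default_rng(0).standard_normal(512)
print(w_wpcc(frame))
```

The intuition behind the centroid weighting is that sub-band spectral centroids stay comparatively stable under additive noise, so letting them modulate the sub-band energies damps the noise-sensitive part of the representation.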
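The DBN training procedure the abstract describes (greedy layer-wise pre-training followed by backpropagation fine-tuning) can likewise be sketched with a small NumPy stack of Bernoulli RBMs. The layer widths, learning rate, CD-1 training loop, and the random `fused` matrix standing in for the concatenated prosody / voice-quality / WPCC / W-WPCC vectors are all assumptions for illustration, not the paper's configuration; the supervised fine-tuning stage is only indicated in a comment.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Bernoulli RBM trained with one-step contrastive divergence (CD-1)."""
    def __init__(self, n_vis, n_hid, lr=0.05):
        self.W = rng.normal(0.0, 0.01, (n_vis, n_hid))
        self.b_v = np.zeros(n_vis)
        self.b_h = np.zeros(n_hid)
        self.lr = lr

    def hidden(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def cd1(self, v0):
        h0 = self.hidden(v0)
        h0_s = (rng.random(h0.shape) < h0).astype(float)  # sample hidden units
        v1 = sigmoid(h0_s @ self.W.T + self.b_v)          # reconstruction
        h1 = self.hidden(v1)
        self.W  += self.lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (h0 - h1).mean(axis=0)

# --- greedy layer-wise pre-training on fused features ---
# each row of `fused` stands in for concatenated prosody, voice-quality,
# WPCC and W-WPCC vectors scaled to [0, 1] (dimensions are illustrative)
fused = rng.random((500, 120))
sizes = [120, 80, 40]                       # assumed layer widths
rbms, x = [], fused
for n_vis, n_hid in zip(sizes[:-1], sizes[1:]):
    rbm = RBM(n_vis, n_hid)
    for epoch in range(20):
        rbm.cd1(x)
    x = rbm.hidden(x)                       # output feeds the next RBM
    rbms.append(rbm)
# x now holds the learned fused representation; in the paper the stack is
# then fine-tuned end-to-end with backpropagation against emotion labels.
```

Greedy pre-training initialises each layer near a good solution before the supervised pass, which is what lets the stack model the non-linear feature interactions the abstract refers to.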
Pages: 1787-1798
Page count: 12