Multi-type features separating fusion learning for Speech Emotion Recognition

Times Cited: 15
Authors
Xu, Xinlei [1 ,2 ]
Li, Dongdong [2 ]
Zhou, Yijun [2 ]
Wang, Zhe [1 ,2 ]
Affiliations
[1] East China Univ Sci Technol, Key Lab Smart Mfg Energy Chem Proc, Minist Educ, Shanghai 200237, Peoples R China
[2] East China Univ Sci & Technol, Dept Comp Sci & Engn, Shanghai 200237, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Speech emotion recognition; Hybrid feature selection; Feature-level fusion; Speaker-independent; CONVOLUTIONAL NEURAL-NETWORKS; GMM; REPRESENTATIONS; CLASSIFICATION; ADAPTATION; RECURRENT; CNN;
DOI
10.1016/j.asoc.2022.109648
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Speech Emotion Recognition (SER) is a challenging task for improving human-computer interaction. Speech data have different representations, and choosing the appropriate features to express the emotion behind the speech is difficult. The human brain can comprehensively judge the same thing across different dimensional representations to obtain a final result. Inspired by this, we believe that different representations of speech data can offer complementary advantages. Therefore, a Hybrid Deep Learning with Multi-type features Model (HD-MFM) is proposed to integrate the acoustic, temporal, and image information of speech. Specifically, we utilize a Convolutional Neural Network (CNN) to extract image information from the spectrogram of speech. A Deep Neural Network (DNN) is used to extract acoustic information from the statistical features of speech. Then, a Long Short-Term Memory (LSTM) network is chosen to extract temporal information from the Mel-Frequency Cepstral Coefficients (MFCC) of speech. Finally, the three different types of speech features are concatenated to obtain a richer emotion representation with better discriminative properties. Considering that different fusion strategies affect the relationship between features, we consider two fusion strategies in this paper, named separating and merging. To evaluate the feasibility and effectiveness of the proposed HD-MFM, we perform extensive experiments on the EMO-DB and IEMOCAP SER corpora. The experimental results show that the separating method has more significant advantages in feature complementarity. The proposed HD-MFM achieves 91.25% accuracy on EMO-DB and 72.02% on IEMOCAP. These results indicate that the proposed HD-MFM can make full use of effective complementary feature representations through the separating strategy to further enhance SER performance. (c) 2022 Elsevier B.V. All rights reserved.
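The feature-level fusion the abstract describes (three branch encoders whose outputs are concatenated into one joint emotion representation) can be sketched as follows. This is a minimal illustrative sketch only: the embedding sizes are assumptions, and the random linear projections stand in for the paper's actual CNN, DNN, and LSTM branches.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(features, weight):
    """Toy branch encoder: a linear projection with tanh, standing in for
    the paper's CNN / DNN / LSTM branches (illustrative only)."""
    return np.tanh(features @ weight)

# Hypothetical per-branch inputs for one utterance.
spectrogram = rng.normal(size=128)  # flattened spectrogram -> CNN branch
statistics  = rng.normal(size=64)   # statistical acoustic features -> DNN branch
mfcc        = rng.normal(size=39)   # one MFCC frame -> LSTM branch

# Each branch maps its input to a 32-dimensional embedding (assumed size).
w_cnn  = rng.normal(size=(128, 32))
w_dnn  = rng.normal(size=(64, 32))
w_lstm = rng.normal(size=(39, 32))

# Feature-level ("separating") fusion: each branch keeps its own embedding,
# and the three embeddings are concatenated into one joint representation
# that a downstream classifier would consume.
fused = np.concatenate([
    encode(spectrogram, w_cnn),
    encode(statistics, w_dnn),
    encode(mfcc, w_lstm),
])

print(fused.shape)  # (96,)
```

The concatenated vector preserves each feature type in its own subspace, which is what lets a downstream classifier exploit their complementarity rather than averaging it away.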
Pages: 13