Robust emotional speech recognition based on binaural model and emotional auditory mask in noisy environments

Cited by: 12
Authors
Bashirpour, Meysam [1 ]
Geravanchizadeh, Masoud [1 ]
Affiliations
[1] Univ Tabriz, Fac Elect & Comp Engn, Tabriz 5166615813, Iran
Source
EURASIP JOURNAL ON AUDIO SPEECH AND MUSIC PROCESSING | 2018
Keywords
Emotional speech recognition; Binaural model; Emotional auditory mask; Classification of emotional states; Kaldi speech recognition system; Noise robustness; INTELLIGIBILITY; FEATURES; DATABASE;
DOI
10.1186/s13636-018-0133-9
CLC number
O42 [Acoustics]
Subject classification codes
070206; 082403
Abstract
The performance of automatic speech recognition systems degrades in the presence of emotional states and in adverse environments (e.g., noisy conditions). This greatly limits the deployment of speech recognition applications in realistic environments. Previous studies in the field of emotion-affected speech recognition focus on improving emotional speech recognition using clean speech data recorded in a quiet environment (i.e., controlled studio settings). The goal of this research is to increase the robustness of speech recognition systems for emotional speech in noisy conditions. The proposed binaural emotional speech recognition system is based on the analysis of the binaural input signal and an estimated emotional auditory mask corresponding to the recognized emotion. Whereas the binaural signal analyzer has the task of segregating speech from noise and constructing a speech mask in a noisy environment, the estimated emotional mask identifies and removes the most emotionally affected spectro-temporal regions of the segregated target speech. In other words, our proposed system combines the two estimated masks (binary mask and emotion-specific mask) of noise and emotion as a way to decrease the word error rate for noisy emotional speech. The performance of the proposed binaural system is evaluated in clean-neutral-train/noisy-emotional-test scenarios for different noise types, signal-to-noise ratios, and spatial configurations of sources. Speech utterances of the Persian emotional speech database are used for the experiments. Simulation results show that the proposed system achieves higher performance compared with baseline automatic speech recognition systems trained with neutral utterances.
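The mask-combination idea described in the abstract can be sketched as an element-wise product of two binary time-frequency masks applied to a spectrogram; this is a minimal illustrative sketch, and the function and variable names here are hypothetical, not taken from the paper's implementation.

```python
import numpy as np

def combine_masks(noisy_spec, binaural_mask, emotional_mask):
    """Apply the joint noise/emotion mask to a time-frequency representation.

    A time-frequency unit is kept only if the binaural analysis marks it as
    speech-dominated (binaural_mask = 1) AND the emotional mask does not flag
    it as strongly emotion-affected (emotional_mask = 1).
    """
    joint_mask = binaural_mask * emotional_mask  # logical AND of binary masks
    return noisy_spec * joint_mask

# Toy example with random data: 64 frequency bands x 100 frames.
rng = np.random.default_rng(0)
spec = rng.random((64, 100))                                # magnitude spectrogram
noise_mask = (rng.random((64, 100)) > 0.5).astype(float)    # binaural (noise) mask
emo_mask = (rng.random((64, 100)) > 0.3).astype(float)      # emotion-specific mask
enhanced = combine_masks(spec, noise_mask, emo_mask)
```

Units removed by either mask are zeroed in `enhanced`, so the recognizer only sees regions judged both noise-free and emotionally neutral.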
Pages: 13