Robust emotional speech recognition based on binaural model and emotional auditory mask in noisy environments

Cited by: 12
Authors
Bashirpour, Meysam [1 ]
Geravanchizadeh, Masoud [1 ]
Affiliation
[1] Univ Tabriz, Fac Elect & Comp Engn, Tabriz 5166615813, Iran
Source
EURASIP JOURNAL ON AUDIO SPEECH AND MUSIC PROCESSING | 2018
Keywords
Emotional speech recognition; Binaural model; Emotional auditory mask; Classification of emotional states; Kaldi speech recognition system; Noise robustness; INTELLIGIBILITY; FEATURES; DATABASE;
DOI
10.1186/s13636-018-0133-9
CLC number
O42 [Acoustics];
Discipline codes
070206; 082403;
Abstract
The performance of automatic speech recognition systems degrades in the presence of emotional states and in adverse environments (e.g., noisy conditions). This greatly limits the deployment of speech recognition applications in realistic environments. Previous studies in the field of emotion-affected speech recognition focus on improving emotional speech recognition using clean speech data recorded in a quiet environment (i.e., controlled studio settings). The goal of this research is to increase the robustness of speech recognition systems for emotional speech in noisy conditions. The proposed binaural emotional speech recognition system is based on the analysis of the binaural input signal and an estimated emotional auditory mask corresponding to the recognized emotion. Whereas the binaural signal analyzer has the task of segregating speech from noise and constructing a speech mask in a noisy environment, the estimated emotional mask identifies and removes the most emotionally affected spectro-temporal regions of the segregated target speech. In other words, the proposed system combines the two estimated masks (the binary mask for noise and the emotion-specific mask) as a way to decrease the word error rate for noisy emotional speech. The performance of the proposed binaural system is evaluated in clean-neutral train/noisy-emotional test scenarios for different noise types, signal-to-noise ratios, and spatial configurations of sources. Speech utterances of the Persian emotional speech database are used for the experiments. Simulation results show that the proposed system achieves higher performance than baseline automatic speech recognition systems trained with neutral utterances.
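The central idea of the abstract, combining a binaural (noise) mask with an emotion-specific mask before recognition, can be sketched as an element-wise combination of two binary time-frequency masks. This is an illustrative sketch only; the function names and the element-wise AND formulation are assumptions, not the authors' exact implementation.

```python
import numpy as np

def combine_masks(binaural_mask, emotional_mask):
    """Element-wise AND of two binary time-frequency masks:
    a T-F unit is kept only if it is speech-dominated
    (binaural mask = 1) and not strongly emotion-affected
    (emotional mask = 1)."""
    return binaural_mask * emotional_mask

def apply_mask(spectrogram, mask):
    """Zero out the T-F units rejected by the combined mask."""
    return spectrogram * mask

# Toy example: 4 frequency channels x 5 time frames
rng = np.random.default_rng(0)
spec = rng.random((4, 5))
binaural = (rng.random((4, 5)) > 0.3).astype(float)   # 1 = speech-dominated
emotional = (rng.random((4, 5)) > 0.2).astype(float)  # 1 = not emotion-affected

combined = combine_masks(binaural, emotional)
masked_spec = apply_mask(spec, combined)
```

The masked spectrogram would then be passed to the feature-extraction front end of the recognizer (Kaldi, in the paper's setup).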
Pages: 13