Speech-enhanced and Noise-aware Networks for Robust Speech Recognition

Cited by: 1
Authors
Lee, Hung-Shin [1 ]
Chen, Pin-Yuan [1 ]
Cheng, Yao-Fei [1 ]
Tsao, Yu [2 ]
Wang, Hsin-Min [1 ]
Affiliations
[1] Acad Sinica, Inst Informat Sci, Taipei, Taiwan
[2] Acad Sinica, Res Ctr Informat Technol Innovat, Taipei, Taiwan
Source
2022 13TH INTERNATIONAL SYMPOSIUM ON CHINESE SPOKEN LANGUAGE PROCESSING (ISCSLP) | 2022
Keywords
robust speech recognition; autoencoder; multi-condition training; noise-aware training; DEEP
DOI
10.1109/ISCSLP57327.2022.10037796
CLC number
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Compensation for channel mismatch and noise interference is essential for robust automatic speech recognition. Enhanced speech has been introduced into the multi-condition training of acoustic models to improve their generalization ability. In this paper, a noise-aware training framework based on two cascaded neural structures is proposed to jointly optimize speech enhancement and speech recognition. The feature-enhancement module is a multi-task autoencoder that decomposes noisy speech into clean speech and noise. By concatenating the enhanced, noise-aware, and noisy features of each frame, the acoustic-modeling module maps each feature-augmented frame to a triphone state by optimizing the lattice-free maximum mutual information and the cross entropy between the predicted and actual state sequences. On top of the factorized time-delay neural network (TDNN-F) and its convolutional variant (CNN-TDNNF), both with SpecAug, the two proposed systems achieve word error rates (WERs) of 3.90% and 3.55%, respectively, on the Aurora-4 task. Compared with the best existing systems that use bigram and trigram language models for decoding, the proposed CNN-TDNNF-based system achieves relative WER reductions of 15.20% and 33.53%, respectively. In addition, the proposed CNN-TDNNF-based system outperforms the baseline CNN-TDNNF system on the AMI task.
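To make the cascaded design concrete, below is a minimal PyTorch sketch of the front half of the framework: a shared encoder with two decoder heads that split noisy frames into clean-speech and noise estimates (the multi-task autoencoder), followed by the per-frame concatenation of enhanced, noise-aware, and noisy features that feeds the acoustic model. The `MultiTaskAE` name, all layer sizes, and the plain MSE reconstruction losses are illustrative assumptions, not the authors' exact configuration; the acoustic model's LF-MMI/cross-entropy objective is only noted in a comment.

```python
# Sketch of the multi-task autoencoder: a shared encoder maps noisy frames
# to a latent code, and two task-specific heads reconstruct the clean-speech
# and noise components. Layer sizes and losses are illustrative assumptions.
import torch
import torch.nn as nn

class MultiTaskAE(nn.Module):
    def __init__(self, feat_dim=40, hidden_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # Two decoders: one estimates clean speech, the other estimates noise.
        self.speech_head = nn.Linear(hidden_dim, feat_dim)
        self.noise_head = nn.Linear(hidden_dim, feat_dim)

    def forward(self, noisy):                 # noisy: (batch, frames, feat_dim)
        z = self.encoder(noisy)
        return self.speech_head(z), self.noise_head(z)

model = MultiTaskAE()
noisy = torch.randn(8, 100, 40)               # dummy batch of noisy frames
enhanced, noise_aware = model(noisy)

# Per-frame feature augmentation for the acoustic model: concatenate the
# enhanced, noise-aware, and original noisy features along the feature axis,
# as described in the abstract. Result: (8, 100, 120).
am_input = torch.cat([enhanced, noise_aware, noisy], dim=-1)

# Joint training would add the acoustic model's LF-MMI/cross-entropy loss on
# `am_input`; only the enhancement reconstruction terms are sketched here,
# against hypothetical clean-speech and noise targets.
clean_target, noise_target = torch.randn_like(noisy), torch.randn_like(noisy)
recon_loss = (nn.functional.mse_loss(enhanced, clean_target)
              + nn.functional.mse_loss(noise_aware, noise_target))
```

Splitting the latent code through two heads is what makes the enhancement "noise-aware": the noise estimate is not discarded but passed to the acoustic model alongside the enhanced and raw features.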
Pages: 145-149
Page count: 5