End-to-End Integration of Speech Recognition, Speech Enhancement, and Self-Supervised Learning Representation

Cited by: 28
Authors
Chang, Xuankai [1 ]
Maekaku, Takashi [2 ]
Fujita, Yuya [2 ]
Watanabe, Shinji [1 ]
Affiliations
[1] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
[2] Yahoo Japan Corp, Tokyo, Japan
Source
INTERSPEECH 2022 | 2022
Funding
U.S. National Science Foundation
Keywords
robust automatic speech recognition; self-supervised learning; speech enhancement; deep learning; DEEP NEURAL-NETWORKS;
DOI
10.21437/Interspeech.2022-10839
Chinese Library Classification (CLC)
O42 [Acoustics]
Discipline codes
070206; 082403
Abstract
This work presents our end-to-end (E2E) automatic speech recognition (ASR) model targeting robust speech recognition, called Integrated speech Recognition with enhanced speech Input for Self-supervised learning representation (IRIS). Compared with conventional E2E ASR models, the proposed model integrates two additional modules: a speech enhancement (SE) module and a self-supervised learning representation (SSLR) module. The SE module enhances the noisy speech; the SSLR module then extracts features from the enhanced speech for recognition. To train the proposed model, we establish an efficient learning scheme. Evaluation results on the monaural CHiME-4 task show that the IRIS model achieves the best performance reported in the literature for the single-channel CHiME-4 benchmark (2.0% WER on the real development set and 3.6% on the real test set), thanks to the powerful pre-trained SSLR module and the fine-tuned SE module.
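The abstract describes a three-stage pipeline: noisy speech is first enhanced by the SE module, the SSLR module extracts features from the enhanced signal, and the ASR module decodes those features. A minimal sketch of that composition is shown below; all module internals are hypothetical placeholder stand-ins, not the authors' actual networks.

```python
# Sketch of the IRIS pipeline: noisy speech -> SE -> SSLR -> ASR.
# Each function body is a placeholder illustrating only the data flow.

def speech_enhancement(noisy: list[float]) -> list[float]:
    """Placeholder SE module: pretend to suppress noise (simple scaling here)."""
    return [0.9 * x for x in noisy]

def sslr_features(enhanced: list[float], frame: int = 4) -> list[list[float]]:
    """Placeholder SSLR module: chunk the enhanced signal into frame-level features."""
    return [enhanced[i:i + frame] for i in range(0, len(enhanced), frame)]

def asr_decode(features: list[list[float]]) -> str:
    """Placeholder ASR module: emit one dummy token per feature frame."""
    return " ".join(f"tok{i}" for i, _ in enumerate(features))

def iris(noisy: list[float]) -> str:
    """End-to-end composition: ASR(SSLR(SE(noisy)))."""
    return asr_decode(sslr_features(speech_enhancement(noisy)))

print(iris([0.1] * 8))  # 8 samples -> 2 frames -> "tok0 tok1"
```

Because the three stages are composed end-to-end, gradients can flow from the ASR objective back through the SSLR and SE modules, which is what allows the SE front-end to be fine-tuned for recognition rather than for enhancement metrics alone.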
Pages: 3819-3823 (5 pages)