INJECTING TEXT IN SELF-SUPERVISED SPEECH PRETRAINING

Cited by: 12
Authors
Chen, Zhehuai [1]
Zhang, Yu [1]
Rosenberg, Andrew [1]
Ramabhadran, Bhuvana [1]
Wang, Gary [1]
Moreno, Pedro [1]
Affiliations
[1] Google Inc., Mountain View, CA 94043, USA
Source
2021 IEEE AUTOMATIC SPEECH RECOGNITION AND UNDERSTANDING WORKSHOP (ASRU) | 2021
Keywords
Speech Recognition; Speech Synthesis; Self-supervised; Representation Learning; RECOGNITION
DOI
10.1109/ASRU51503.2021.9688018
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Self-supervised pretraining for Automatic Speech Recognition (ASR) has shown varied degrees of success. In this paper, we propose to jointly learn representations during pretraining from two different modalities: speech and text. The proposed method, tts4pretrain, complements the power of contrastive learning in self-supervision with linguistic/lexical representations derived from synthesized speech, effectively learning from untranscribed speech and unspoken text. Lexical learning in the speech encoder is enforced through an additional sequence loss term that is coupled with the contrastive loss during pretraining. We demonstrate that this novel pretraining method yields Word Error Rate (WER) reductions of 10% relative on the well-benchmarked Librispeech task over a state-of-the-art baseline pretrained with wav2vec2.0 only. The proposed method also serves as an effective strategy to compensate for a lack of transcribed speech, effectively matching the performance of 5000 hours of transcribed speech with just 100 hours on the AMI meeting transcription task. Finally, we demonstrate WER reductions of up to 15% on an in-house Voice Search task over traditional pretraining. Incorporating text into encoder pretraining is complementary to rescoring with a larger or in-domain language model, resulting in an additional 6% relative reduction in WER.
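The abstract describes coupling a wav2vec2.0-style contrastive objective with an additional sequence loss that ties the encoder to text (via synthesized speech). The sketch below is only an illustration of that coupling, not the paper's actual implementation: the InfoNCE-style contrastive term, the temperature, and the weighting factor `alpha` are all assumptions for the sake of the example.

```python
import numpy as np

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss: the anchor representation should be closer to the
    positive (masked-step target) than to any distractor. Illustrative only."""
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()                      # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                    # correct class is index 0

def joint_pretrain_loss(l_contrastive, l_sequence, alpha=1.0):
    """Couple the contrastive term with a sequence loss computed against the
    text used for synthesis; alpha is an assumed weighting hyperparameter."""
    return l_contrastive + alpha * l_sequence
```

In a real pipeline, `l_sequence` would come from a decoder or CTC-style loss over the synthesized utterance's transcript, so that unspoken text contributes a lexical training signal alongside untranscribed speech.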
Pages: 251-258
Page count: 8
References
56 in total
[1] Baevski A., 2020, Advances in Neural Information Processing Systems, V33, P12449, DOI 10.5555/3495724.3496768
[2] Baskar, Murali Karthick; Watanabe, Shinji; Astudillo, Ramon; Hori, Takaaki; Burget, Lukas; Cernocky, Jan. Semi-supervised Sequence-to-sequence ASR using Unpaired Speech and Text. INTERSPEECH 2019, 2019: 3790-3794
[3] Biadsy, Fadi; Weiss, Ron J.; Moreno, Pedro J.; Kanevsky, Dimitri; Jia, Ye. Parrotron: An End-to-End Speech-to-Speech Conversion Model and its Applications to Hearing-Impaired Speech and Speech Separation. INTERSPEECH 2019, 2019: 4115-4119
[4] Biadsy, Fadi; Ghodsi, Mohammadreza; Caseiro, Diamantino. Effectively Building Tera Scale MaxEnt Language Models Incorporating Non-Linguistic Signals. INTERSPEECH 2017, 2017: 2710-2714
[5] Carletta J., 2005, Lecture Notes in Computer Science, V3869, P28
[6] Chan William, 2021, arXiv preprint
[7] Shan Changhao, 2019, ICASSP 2019, P5631, DOI 10.1109/ICASSP.2019.8682490
[8] Chen Zhehuai, 2020, INTERSPEECH
[9] Chen Zhehuai, 2021, INTERSPEECH
[10] Chung Y. A., 2020, ICASSP 2020, P3497, DOI 10.1109/ICASSP40776.2020.9054438