Residual Language Model for End-to-end Speech Recognition

Cited by: 5
Authors
Tsunoo, Emiru [1 ]
Kashiwagi, Yosuke [1 ]
Narisetty, Chaitanya [2 ]
Watanabe, Shinji [2 ]
Affiliations
[1] Sony Grp Corp, Tokyo, Japan
[2] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
Source
INTERSPEECH 2022 | 2022
Keywords
speech recognition; language model; attention-based encoder-decoder; internal language model estimation;
DOI
10.21437/Interspeech.2022-10557
CLC number
O42 [Acoustics];
Discipline codes
070206 ; 082403 ;
Abstract
End-to-end automatic speech recognition struggles to adapt to speech from unknown target domains, even when trained on large amounts of paired audio-text data. Recent studies estimate the model's linguistic bias as an internal language model (LM). To adapt effectively to the target domain, the internal LM is subtracted from the posterior during inference and fused with an external target-domain LM. However, this fusion complicates inference, and the internal LM estimate may not always be accurate. In this paper, we propose a simple external LM fusion method for domain adaptation that accounts for internal LM estimation during training. We directly model the residual factor between the external and internal LMs, namely the residual LM. To train the residual LM stably, we propose smoothing the estimated internal LM and optimizing it with a combination of cross-entropy and mean-squared-error losses, which reflect the statistical behavior of the internal LM on target-domain data. We experimentally confirmed that the proposed residual LM outperforms internal LM estimation in most cross-domain and intra-domain scenarios.
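The fusion the abstract describes reduces to per-token log-score arithmetic during beam search. The sketch below is a minimal illustration, not the paper's implementation: the function names and fusion weights (the lambdas) are hypothetical, and the residual-LM variant simply replaces the external/internal score pair with a single residual score, as the abstract suggests.

```python
def ilme_fusion_score(log_p_asr: float, log_p_ilm: float,
                      log_p_ext: float, lam_ilm: float = 0.3,
                      lam_ext: float = 0.5) -> float:
    """Conventional internal-LM-estimation fusion: subtract the
    estimated internal LM score and add the external LM score.
    Weights are illustrative, not from the paper."""
    return log_p_asr - lam_ilm * log_p_ilm + lam_ext * log_p_ext


def residual_fusion_score(log_p_asr: float, log_p_res: float,
                          lam_res: float = 0.5) -> float:
    """Residual-LM fusion: a single residual LM, trained to model the
    external/internal ratio, replaces the two-term correction above,
    simplifying inference."""
    return log_p_asr + lam_res * log_p_res
```

At decode time the residual variant needs one LM forward pass per hypothesis instead of two, which is the simplification the abstract refers to.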
Pages: 3899-3903
Page count: 5