RWTH ASR Systems for LibriSpeech: Hybrid vs Attention

Cited by: 123
Authors
Luescher, Christoph [1 ]
Beck, Eugen [1 ,2 ]
Irie, Kazuki [1 ]
Kitza, Markus [1 ]
Michel, Wilfried [1 ,2 ]
Zeyer, Albert [1 ,2 ]
Schlueter, Ralf [1 ]
Ney, Hermann [1 ,2 ]
Affiliations
[1] RWTH Aachen University, Computer Science Department, Human Language Technology and Pattern Recognition, D-52074 Aachen, Germany
[2] AppTek GmbH, D-52062 Aachen, Germany
Source
INTERSPEECH 2019 | 2019
Funding
European Research Council
Keywords
speech recognition; hybrid BLSTM/HMM; attention; LibriSpeech; neural networks
DOI
10.21437/Interspeech.2019-1780
Abstract
We present state-of-the-art automatic speech recognition (ASR) systems employing a standard hybrid DNN/HMM architecture, compared to an attention-based encoder-decoder design, for the LibriSpeech task. Detailed descriptions of the system development, including model design, pretraining schemes, training schedules, and optimization approaches, are provided for both system architectures. Both the hybrid DNN/HMM and the attention-based systems employ bi-directional LSTMs for acoustic modeling/encoding. For language modeling, we employ both LSTM- and Transformer-based architectures. All our systems are built using RWTH's open-source toolkits RASR and RETURNN. To the best of the authors' knowledge, the results obtained when training on the full LibriSpeech training set are the best published to date, both for the hybrid DNN/HMM and the attention-based systems. Our single hybrid system even outperforms previous results obtained from combining eight single systems. Our comparison shows that on the LibriSpeech 960h task, the hybrid DNN/HMM system outperforms the attention-based system by 15% relative on the clean and 40% relative on the other test sets in terms of word error rate. Moreover, experiments on a reduced 100h subset of the LibriSpeech training corpus show an even more pronounced margin between the hybrid DNN/HMM and attention-based architectures.
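The "15% relative" and "40% relative" comparisons in the abstract refer to relative word error rate (WER) reduction, not absolute percentage-point differences. A minimal sketch of that calculation; the WER values used below are hypothetical illustrations, not the paper's actual numbers:

```python
def relative_wer_reduction(wer_a: float, wer_b: float) -> float:
    """Relative WER reduction of system A over baseline system B, in percent."""
    return (wer_b - wer_a) / wer_b * 100.0

# Hypothetical example: if the attention-based system scored 3.0% WER
# and the hybrid system 2.55%, the hybrid would be 15% relatively better.
print(relative_wer_reduction(2.55, 3.0))  # 15.0

# Likewise, 3.0% vs. a 5.0% baseline is a 40% relative reduction.
print(relative_wer_reduction(3.0, 5.0))  # 40.0
```

Note that a 40% relative reduction from a 5.0% baseline is only 2.0 absolute percentage points, which is why ASR papers typically report both absolute WER and relative improvement.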
Pages: 231-235 (5 pages)