Towards better decoding and language model integration in sequence to sequence models

Cited by: 122
Authors
Chorowski, Jan [1 ]
Jaitly, Navdeep [2 ]
Affiliations
[1] Google Brain, Mountain View, CA USA
[2] Nvidia, Santa Clara, CA 95051 USA
Source
18th Annual Conference of the International Speech Communication Association (Interspeech 2017), Vols 1-6: Situated Interaction | 2017
Keywords
attention mechanism; recurrent neural networks; LSTM;
DOI
10.21437/Interspeech.2017-343
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The recently proposed Sequence-to-Sequence (seq2seq) framework advocates replacing complex data processing pipelines, such as an entire automatic speech recognition system, with a single neural network trained in an end-to-end fashion. In this contribution, we analyse an attention-based seq2seq speech recognition system that directly transcribes recordings into characters. We observe two shortcomings: overconfidence in its predictions and a tendency to produce incomplete transcriptions when language models are used. We propose practical solutions to both problems, achieving competitive speaker-independent word error rates on the Wall Street Journal dataset: without separate language models we reach 10.6% WER, while together with a trigram language model we reach 6.7% WER, a state-of-the-art result for HMM-free methods.
Pages: 523-527
Page count: 5
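
The abstract mentions two decoding-time fixes without spelling them out. As a hedged illustration only (not the authors' code), the Python sketch below shows one common shape such a fix takes: beam-search scoring that mixes the seq2seq log-probability with an external language model ("shallow fusion") plus a coverage bonus that rewards hypotheses attending to more of the input, counteracting the tendency toward short, incomplete transcriptions. The function name and the weights lm_weight and coverage_weight are illustrative assumptions, not values from the paper.

def hypothesis_score(seq2seq_logprob: float,
                     lm_logprob: float,
                     num_frames_attended: int,
                     lm_weight: float = 0.5,
                     coverage_weight: float = 0.1) -> float:
    # Hedged sketch, not the authors' implementation: score one beam-search
    # hypothesis. The coverage term rewards attending to more input frames,
    # which discourages incomplete transcriptions when an LM is mixed in.
    return (seq2seq_logprob
            + lm_weight * lm_logprob
            + coverage_weight * num_frames_attended)

# A longer hypothesis with a slightly worse LM score can still win because
# it covers more of the utterance:
short = hypothesis_score(-4.0, lm_logprob=-6.0, num_frames_attended=40)
full = hypothesis_score(-4.5, lm_logprob=-7.0, num_frames_attended=90)
print(short, full)  # -3.0 1.0 -> the fuller transcription scores higher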