Lattice Based Transcription Loss for End-to-End Speech Recognition

Cited: 0
Authors
Kang, Jian [1 ]
Zhang, Wei-Qiang [1 ]
Liu, Jia [1 ]
Affiliations
[1] Tsinghua University, Department of Electronic Engineering, Tsinghua National Laboratory for Information Science and Technology, Beijing 100084, China
Source
2016 10TH INTERNATIONAL SYMPOSIUM ON CHINESE SPOKEN LANGUAGE PROCESSING (ISCSLP) | 2016
Funding
National Natural Science Foundation of China
Keywords
lattice; transcription loss; end-to-end system; connectionist temporal classification;
DOI
Not available
CLC Number
TP301 [Theory and Methods]
Discipline Code
081202
Abstract
End-to-end speech recognition systems have been successfully implemented and have become competitive replacements for hybrid systems. A common loss function for training end-to-end systems is connectionist temporal classification (CTC), which maximizes the log likelihood between the feature sequence and the associated transcription sequence. However, CTC training has a key weakness: the training criterion differs from the test criterion, since training optimizes log likelihood while testing is measured by word error rate (WER). In this work, we introduce a new lattice-based transcription loss function to address this deficiency of CTC training. Compared to the CTC function, our new method optimizes the model directly with respect to the transcription loss. We evaluate this new algorithm on both a small speech recognition task, the Wall Street Journal (WSJ) dataset, and a large-vocabulary speech recognition task, the Switchboard dataset. Results demonstrate that our algorithm outperforms the traditional CTC criterion, achieving a 7% relative WER reduction. In addition, we compare our new algorithm with discriminative training algorithms such as state-level minimum Bayes risk (SMBR) and minimum word error (MWE), showing that our algorithm is more convenient and more versatile for speech recognition.
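The abstract notes that CTC training maximizes the log likelihood of the transcription given the per-frame network outputs. As an illustrative sketch only (not the paper's implementation; the function name and toy inputs below are invented for this example), the standard CTC forward algorithm that computes this log likelihood can be written as:

```python
import math

def ctc_log_likelihood(log_probs, labels, blank=0):
    """CTC forward algorithm: log P(labels | log_probs).

    log_probs: T x V table of per-frame log-probabilities (T frames, V symbols).
    labels: target label sequence without blanks.
    blank: index of the CTC blank symbol.
    """
    NEG = float("-inf")

    def logadd(a, b):
        # Numerically stable log(exp(a) + exp(b)).
        if a == NEG:
            return b
        if b == NEG:
            return a
        m = max(a, b)
        return m + math.log1p(math.exp(min(a, b) - m))

    # Extended label sequence with blanks interleaved: b, l1, b, l2, ..., b
    ext = [blank]
    for l in labels:
        ext.extend([l, blank])
    S, T = len(ext), len(log_probs)

    # alpha[s]: log prob of all alignments of the first t frames
    # that end at extended-label position s.
    alpha = [NEG] * S
    alpha[0] = log_probs[0][ext[0]]
    if S > 1:
        alpha[1] = log_probs[0][ext[1]]

    for t in range(1, T):
        new = [NEG] * S
        for s in range(S):
            a = alpha[s]                       # stay at the same position
            if s > 0:
                a = logadd(a, alpha[s - 1])    # advance one position
            # Skip over a blank, allowed only between distinct labels.
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                a = logadd(a, alpha[s - 2])
            new[s] = a + log_probs[t][ext[s]]
        alpha = new

    # Valid alignments end on the final label or the final blank.
    return logadd(alpha[S - 1], alpha[S - 2] if S > 1 else NEG)
```

For instance, with two frames of uniform probabilities over {blank, "a"}, the label sequence ["a"] is produced by three alignments (a·a, blank·a, a·blank), each of probability 0.25, so the result is log 0.75. The lattice-based loss the paper proposes replaces this per-sequence log likelihood with a loss computed over a lattice of competing hypotheses.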
Pages: 5