Fast End-to-End Speech Recognition Via Non-Autoregressive Models and Cross-Modal Knowledge Transferring From BERT

Cited by: 43
Authors
Bai, Ye [1,2]
Yi, Jiangyan [2]
Tao, Jianhua [3]
Tian, Zhengkun [1,2]
Wen, Zhengqi [2]
Zhang, Shuai [1,2]
Affiliations
[1] Univ Chinese Acad Sci, Beijing 100190, Peoples R China
[2] Chinese Acad Sci, NLPR Inst Automat, Beijing 100190, Peoples R China
[3] CAS Ctr Excellence Brain Sci & Intelligence Techn, Beijing 100190, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Speech recognition; fast; end-to-end; non-autoregressive; attention; BERT; cross-modal; transfer learning; ATTENTION; ASR; TRANSFORMER; NETWORKS;
DOI
10.1109/TASLP.2021.3082299
CLC number
O42 [Acoustics]
Subject classification codes
070206; 082403
Abstract
Attention-based encoder-decoder (AED) models have achieved promising performance in speech recognition. However, because the decoder predicts text tokens (such as characters or words) in an autoregressive manner, an AED model cannot predict all tokens in parallel, which makes inference relatively slow. To address this, we propose an end-to-end non-autoregressive speech recognition model called LASO (Listen Attentively, and Spell Once). The model uses attention mechanisms to aggregate the encoded speech features into a hidden representation for each output token. The model can therefore capture token relations through self-attention over the representations aggregated from the whole speech signal, rather than through autoregressive modeling on tokens. Without explicit autoregressive language modeling, the model predicts all tokens in the sequence in parallel, so inference is efficient. Moreover, we propose a cross-modal transfer learning method that uses a text-modal language model to improve the performance of the speech-modal LASO by aligning token semantics. We conduct experiments on two public Chinese speech datasets of different scales, AISHELL-1 and AISHELL-2. Experimental results show that, compared with autoregressive Transformer models, the proposed model achieves a speedup of about 50x with competitive performance, and that cross-modal knowledge transfer from the text-modal model further improves the performance of the speech-modal model.
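To make the mechanism in the abstract concrete, the following is a minimal PyTorch-style sketch of a LASO-like non-autoregressive decoder, plus one plausible form of the cross-modal alignment objective. All names here (PositionQueryDecoder, semantic_alignment_loss, the learned per-position queries) are illustrative assumptions, not the authors' implementation; the paper's actual architecture and loss may differ in detail.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PositionQueryDecoder(nn.Module):
    # One learned query per output position attends over the encoded
    # speech features, so each output slot aggregates the acoustic
    # evidence it needs; self-attention then relates the slots to each
    # other, and a linear layer emits logits for every position at once.
    def __init__(self, d_model, n_heads, vocab_size, max_len):
        super().__init__()
        self.pos_queries = nn.Parameter(torch.randn(max_len, d_model))
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.proj = nn.Linear(d_model, vocab_size)

    def forward(self, enc_out):
        # enc_out: (batch, speech_frames, d_model) from the speech encoder.
        q = self.pos_queries.unsqueeze(0).expand(enc_out.size(0), -1, -1)
        agg, _ = self.cross_attn(q, enc_out, enc_out)  # aggregate speech per token slot
        rel, _ = self.self_attn(agg, agg, agg)         # capture token relations
        return self.proj(agg + rel)                    # (batch, max_len, vocab_size), all in parallel

def semantic_alignment_loss(laso_hidden, bert_hidden):
    # A hypothetical form of the cross-modal transfer objective: pull
    # LASO's per-token hidden representations toward those of a frozen
    # BERT. The paper's exact alignment loss may differ.
    return F.mse_loss(laso_hidden, bert_hidden.detach())

A single forward pass yields every token slot at once, which is what removes the token-by-token decoding loop of an autoregressive decoder, e.g. logits = PositionQueryDecoder(256, 4, 4000, 60)(torch.randn(8, 300, 256)) produces logits of shape (8, 60, 4000) in one step.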
Pages: 1897-1911
Page count: 15