Integrating Knowledge Into End-to-End Speech Recognition From External Text-Only Data

Cited by: 4
Authors
Bai, Ye [1 ]
Yi, Jiangyan [2 ]
Tao, Jianhua [2 ,3 ]
Wen, Zhengqi [2 ]
Tian, Zhengkun [1 ]
Zhang, Shuai [1 ]
Affiliations
[1] Univ Chinese Acad Sci, Beijing 100190, Peoples R China
[2] Chinese Acad Sci, Inst Automat, NLPR, Beijing 100190, Peoples R China
[3] CAS Ctr Excellence Brain Sci & Intelligence Techn, Beijing 100190, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
End-to-End; language modeling; speech recognition; teacher-student learning; transfer learning; NETWORK LANGUAGE MODELS;
DOI
10.1109/TASLP.2021.3066274
CLC Number
O42 [Acoustics];
Discipline Codes
070206; 082403;
Abstract
Attention-based encoder-decoder (AED) models have achieved promising performance in speech recognition. However, because of end-to-end training, an AED model is usually trained only on paired speech-text data, and it is challenging to incorporate external text-only data. Another issue is that an AED model does not use the right context of a text token when predicting that token. To alleviate these two issues, we propose a unified method called LST (Learn Spelling from Teachers) to integrate knowledge from external text-only data into an AED model and to leverage the whole context of a sentence. The method has two stages. First, in the representation stage, a language model (LM) is trained on the text; the knowledge in the text is thereby compressed into the LM. Then, in the transfer stage, this knowledge is transferred to the AED model via teacher-student learning. To further exploit the whole context of a sentence, we propose an LM called the causal cloze completer (COR), which estimates the probability of a token given both its left and right context. With LST training, the AED model can therefore leverage the whole context of a sentence. Unlike fusion-based methods, which use an LM during decoding, the proposed method adds no extra complexity at the inference stage. We conduct experiments on two public Chinese datasets of different scales, AISHELL-1 and AISHELL-2. The experimental results demonstrate the effectiveness of leveraging external text-only data and whole-sentence context with our proposed method, compared with baseline hybrid systems and AED-based systems.
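The teacher-student transfer described in the abstract amounts to training the AED decoder against the LM teacher's soft token distributions in addition to the hard transcript labels. Below is a minimal sketch of such an interpolated distillation objective; the function names, the interpolation weight `lam`, and the plain-Python tensor layout are illustrative assumptions, not the paper's actual implementation.

```python
import math

def softmax(logits):
    """Numerically stable softmax over one vocabulary-sized logit vector."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def lst_loss(student_logits, teacher_probs, targets, lam=0.5):
    """Interpolated distillation objective over a T-token output sequence:
    (1 - lam) * cross-entropy against the ground-truth transcript
    + lam * KL(teacher || student), summed over time steps.

    student_logits: T lists of per-vocabulary logits from the AED decoder.
    teacher_probs:  T lists of per-vocabulary probabilities from the LM teacher.
    targets:        T ground-truth token indices.
    """
    total = 0.0
    for logits, q, y in zip(student_logits, teacher_probs, targets):
        p = softmax(logits)                       # student distribution
        ce = -math.log(p[y])                      # hard-label cross-entropy
        kl = sum(qi * math.log(qi / pi)           # soft-label KL divergence
                 for qi, pi in zip(q, p) if qi > 0)
        total += (1.0 - lam) * ce + lam * kl
    return total
```

With `lam = 0` this reduces to ordinary cross-entropy training of the AED model; with `lam > 0` the teacher's distribution over plausible spellings, learned from text-only data, shapes the student's output, and nothing extra is needed at inference time.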
Pages: 1340-1351
Page count: 12
Related Papers
50 records
  • [1] Internal Language Model Adaptation with Text-Only Data for End-to-End Speech Recognition
    Meng, Zhong
    Gaur, Yashesh
    Kanda, Naoyuki
    Li, Jinyu
    Chen, Xie
    Wu, Yu
    Gong, Yifan
    INTERSPEECH 2022, 2022, : 2608 - 2612
  • [2] Text-Only Domain Adaptation for End-to-End Speech Recognition through Down-Sampling Acoustic Representation
    Zhu, Jiaxu
    Tong, Weinan
    Xu, Yaoxun
    Song, Changhe
    Wu, Zhiyong
    You, Zhao
    Su, Dan
    Yu, Dong
    Meng, Helen
    INTERSPEECH 2023, 2023, : 1334 - 1338
  • [3] Text Only Domain Adaptation with Phoneme Guided Data Splicing for End-to-End Speech Recognition
    Wang, Wei
    Gong, Xun
    Shao, Hang
    Yang, Dongning
    Qian, Yanmin
    INTERSPEECH 2023, 2023, : 3347 - 3351
  • [4] Multitask Training with Text Data for End-to-End Speech Recognition
    Wang, Peidong
    Sainath, Tara N.
    Weiss, Ron J.
    INTERSPEECH 2021, 2021, : 2566 - 2570
  • [5] End-to-end Speech-to-Punctuated-Text Recognition
    Nozaki, Jumon
    Kawahara, Tatsuya
    Ishizuka, Kenkichi
    Hashimoto, Taiichi
    INTERSPEECH 2022, 2022, : 1811 - 1815
  • [6] Speech-and-Text Transformer: Exploiting Unpaired Text for End-to-End Speech Recognition
    Wang, Qinyi
    Zhou, Xinyuan
    Li, Haizhou
    APSIPA TRANSACTIONS ON SIGNAL AND INFORMATION PROCESSING, 2023, 12 (01)
  • [7] Text-only domain adaptation for end-to-end ASR using integrated text-to-mel-spectrogram generator
    Bataev, Vladimir
    Korostik, Roman
    Shabalin, Evgeny
    Lavrukhin, Vitaly
    Ginsburg, Boris
    INTERSPEECH 2023, 2023, : 2928 - 2932
  • [8] You Do Not Need More Data: Improving End-To-End Speech Recognition by Text-To-Speech Data Augmentation
    Laptev, Aleksandr
    Korostik, Roman
    Svischev, Aleksey
    Andrusenko, Andrei
    Medennikov, Ivan
    Rybin, Sergey
    2020 13TH INTERNATIONAL CONGRESS ON IMAGE AND SIGNAL PROCESSING, BIOMEDICAL ENGINEERING AND INFORMATICS (CISP-BMEI 2020), 2020, : 439 - 444
  • [9] Integrating Lattice-Free MMI Into End-to-End Speech Recognition
    Tian, Jinchuan
    Yu, Jianwei
    Weng, Chao
    Zou, Yuexian
    Yu, Dong
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2023, 31: 25 - 38
  • [10] An End-to-End Chinese Speech Recognition Algorithm Integrating Language Model
    Lü, Kun-Ru
    Wu, Chun-Guo
    Liang, Yan-Chun
    Yuan, Yu-Ping
    Ren, Zhi-Min
    Zhou, You
    Shi, Xiao-Hu
    Tien Tzu Hsueh Pao/Acta Electronica Sinica, 2021, 49 (11): 2177 - 2185