Emphasizing unseen words: New vocabulary acquisition for end-to-end speech recognition

Cited by: 8
Authors
Qu, Leyuan [1 ,2 ]
Weber, Cornelius [1 ]
Wermter, Stefan [1 ]
Affiliations
[1] Univ Hamburg, Dept Informat, Knowledge Technol, Hamburg, Germany
[2] Zhejiang Lab, Dept Artificial Intelligence, Hangzhou, Peoples R China
Keywords
Automatic speech recognition; Continual learning; Out-of-vocabulary word recognition; End-to-end learning; Loss rescaling; NEURAL-NETWORKS; ATTENTION;
DOI
10.1016/j.neunet.2023.01.027
Chinese Library Classification
TP18 [Theory of artificial intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Due to the dynamic nature of human language, automatic speech recognition (ASR) systems need to continuously acquire new vocabulary. Out-Of-Vocabulary (OOV) words, such as trending words and new named entities, pose problems to modern ASR systems that require long training times to adapt their large numbers of parameters. Different from most previous research focusing on language model post-processing, we tackle this problem on an earlier processing level and eliminate the bias in acoustic modeling to recognize OOV words acoustically. We propose to generate OOV words using text-to-speech systems and to rescale losses to encourage neural networks to pay more attention to OOV words. Specifically, we enlarge the classification loss used for training neural networks' parameters of utterances containing OOV words (sentence-level), or rescale the gradient used for back-propagation for OOV words (word-level), when fine-tuning a previously trained model on synthetic audio. To overcome catastrophic forgetting, we also explore the combination of loss rescaling and model regularization, i.e., L2 regularization and elastic weight consolidation (EWC). Compared with previous methods that just fine-tune on synthetic audio with EWC, the experimental results on the LibriSpeech benchmark reveal that our proposed loss rescaling approach can achieve significant improvement on the recall rate with only a slight decrease on word error rate. Moreover, word-level rescaling is more stable than utterance-level rescaling and leads to higher recall rates and precision rates on OOV word recognition. Furthermore, our proposed combined loss rescaling and weight consolidation methods can support continual learning of an ASR system. (c) 2023 The Author(s). Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
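The two rescaling levels described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, the single `boost` factor, and the whitespace-based word matching are all simplifying assumptions made here for illustration. At the utterance level, the scalar loss of any utterance whose transcript contains an OOV word is multiplied by a boost factor; at the word level, only the gradient contributions of the OOV words themselves are scaled.

```python
def rescale_utterance_losses(losses, transcripts, oov_words, boost=2.0):
    """Utterance-level rescaling: multiply the loss of every utterance
    whose transcript contains at least one OOV word by `boost`."""
    scaled = []
    for loss, text in zip(losses, transcripts):
        words = set(text.lower().split())
        factor = boost if words & oov_words else 1.0
        scaled.append(loss * factor)
    return scaled


def rescale_word_gradients(grads, words, oov_words, boost=2.0):
    """Word-level rescaling: scale only the gradient of each OOV word,
    leaving in-vocabulary words untouched."""
    return [g * (boost if w.lower() in oov_words else 1.0)
            for g, w in zip(grads, words)]


# Example: the second utterance contains the OOV word "covid",
# so its loss is boosted; in the word-level variant only that
# word's gradient is boosted.
scaled_losses = rescale_utterance_losses(
    [1.0, 2.0], ["hello world", "a covid update"], {"covid"})
scaled_grads = rescale_word_gradients(
    [0.5, 1.0], ["the", "covid"], {"covid"})
```

In a real training loop the utterance-level factor would multiply the per-utterance CTC or cross-entropy loss before averaging, while the word-level variant would be applied to per-token gradients (e.g. via a gradient hook); the paper reports the word-level variant to be the more stable of the two.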
Pages: 494-504
Number of pages: 11