EXPLORING NON-AUTOREGRESSIVE END-TO-END NEURAL MODELING FOR ENGLISH MISPRONUNCIATION DETECTION AND DIAGNOSIS

Cited by: 5
Authors
Wang, Hsin-Wei [1]
Yan, Bi-Cheng [1]
Chiu, Hsuan-Sheng [2]
Hsu, Yung-Chang [3]
Chen, Berlin [1]
Affiliations
[1] Natl Taiwan Normal Univ, Taipei, Taiwan
[2] Chunghwa Telecom Labs, Taipei, Taiwan
[3] EZAI, Taipei, Taiwan
Source
2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP) | 2022
Keywords
Computer-assisted language training; mispronunciation detection and diagnosis; non-autoregressive; pronunciation modeling
DOI
10.1109/ICASSP43922.2022.9747569
CLC Number
O42 [Acoustics]
Subject Classification Codes
070206; 082403
Abstract
End-to-end (E2E) neural modeling has emerged as a predominant approach to developing computer-assisted pronunciation training (CAPT) systems, showing performance competitive with conventional pronunciation-scoring based methods. However, current E2E neural methods for CAPT face at least two pivotal challenges. On one hand, most E2E methods operate in an autoregressive manner, using left-to-right beam search to dictate an L2 learner's pronunciations; this leads to slow inference, which inevitably hinders their practical use. On the other hand, E2E neural methods are normally data-hungry, and an insufficient amount of nonnative training data often reduces their efficacy on mispronunciation detection and diagnosis (MD&D). In response, we put forward a novel MD&D method that leverages non-autoregressive (NAR) E2E neural modeling to dramatically speed up inference while maintaining performance in line with conventional E2E neural methods. In addition, we design a pronunciation modeling network, stacked on top of the NAR E2E model, to further boost MD&D effectiveness. Empirical experiments conducted on the L2-ARCTIC English dataset validate the feasibility of our method in comparison with several top-of-the-line E2E models and an iconic pronunciation-scoring based method built on a DNN-HMM acoustic model.
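To illustrate the decoding contrast the abstract draws (autoregressive left-to-right beam search versus non-autoregressive parallel decoding), the following is a minimal, hypothetical Python sketch of NAR greedy CTC-style decoding followed by a naive phone-level mispronunciation check. It assumes a CTC-style NAR head that emits per-frame phone posteriors in a single forward pass; every name in it (greedy_ctc_decode, detect_mispronunciations, BLANK) is illustrative and is not taken from the paper.

# Hypothetical sketch, not the paper's code: NAR decoding emits all
# output tokens from one forward pass, so decoding is a frame-wise
# argmax with no sequential dependency between tokens (unlike
# left-to-right beam search in autoregressive models).
import numpy as np

BLANK = 0  # assumed CTC blank index

def greedy_ctc_decode(log_probs):
    """Collapse frame-wise argmax predictions: merge repeats, drop blanks."""
    best = log_probs.argmax(axis=-1)  # shape (T,)
    phones, prev = [], BLANK
    for p in best:
        if p != BLANK and p != prev:
            phones.append(int(p))
        prev = p
    return phones

def detect_mispronunciations(predicted, canonical):
    """Toy MD&D check: flag positions where the recognized phone differs
    from the canonical (prompted) phone. A real system would first align
    the two sequences."""
    return [p != c for p, c in zip(predicted, canonical)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, num_phones = 50, 40                      # frames, phone inventory size
    fake_log_probs = rng.standard_normal((T, num_phones))
    hyp = greedy_ctc_decode(fake_log_probs)
    canonical = hyp[:]                          # pretend the prompt matches
    if canonical:
        canonical[0] = (canonical[0] + 1) % num_phones  # inject one mismatch
    print("recognized phones:", hyp)
    print("mispronounced?   :", detect_mispronunciations(hyp, canonical))

The sketch only demonstrates why NAR inference is fast (a single parallel pass plus an O(T) collapse); the paper's actual contribution additionally stacks a pronunciation modeling network on top of the NAR E2E model, which is not represented here.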
Pages: 6817-6821 (5 pages)