Speech Processing for Language Learning: A Practical Approach to Computer-Assisted Pronunciation Teaching

Citations: 23
Authors
Bogach, Natalia [1 ]
Boitsova, Elena [1 ]
Chernonog, Sergey [1 ]
Lamtev, Anton [1 ]
Lesnichaya, Maria [1 ]
Lezhenin, Iurii [1 ]
Novopashenny, Andrey [1 ]
Svechnikov, Roman [1 ]
Tsikach, Daria [1 ]
Vasiliev, Konstantin [1 ]
Pyshkin, Evgeny [2 ]
Blake, John [3 ]
Affiliations
[1] Peter Great St Petersburg Polytech Univ, Inst Comp Sci & Technol, St Petersburg 195029, Russia
[2] Univ Aizu, Sch Comp Sci & Engn, Div Informat Syst, Aizu Wakamatsu, Fukushima 9658580, Japan
[3] Univ Aizu, Ctr Language Res, Sch Comp Sci & Engn, Aizu Wakamatsu, Fukushima 9658580, Japan
Funding
Japan Society for the Promotion of Science
Keywords
speech processing; computer-assisted pronunciation training (CAPT); voice activity detection (VAD); audio-visual feedback; dynamic time warping (DTW); cross-recurrence quantification analysis (CRQA); PROSODY; RECOGNITION; FEEDBACK; LEARNERS; TOOLS;
DOI
10.3390/electronics10030235
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
This article contributes to the discourse on how contemporary computer and information technology may help improve foreign language learning, not only by supporting better and more flexible workflows and digitizing study materials, but also by creating completely new use cases made possible by advances in signal processing algorithms. We discuss an approach and propose a holistic solution for teaching the phonological phenomena that are crucial for correct pronunciation: the phonemes; the energy and duration of syllables and pauses, which construct the phrasal rhythm; and the tone movement within an utterance, i.e., the phrasal intonation. The working prototype of the StudyIntonation computer-assisted pronunciation training (CAPT) system is a tool for mobile devices that offers a set of tasks based on a "listen and repeat" approach and gives audio-visual feedback in real time. The present work summarizes the efforts taken to enrich the current version of this CAPT tool with two new functions: phonetic transcription and rhythmic patterns of model and learner speech. Both are built on the third-party automatic speech recognition (ASR) library Kaldi, which was incorporated into the StudyIntonation signal processing core. We also examine the scope of ASR applicability within the CAPT system workflow and evaluate the Levenshtein distance between transcriptions made by human experts and those obtained automatically by our code. We developed an algorithm for rhythm reconstruction using the acoustic and language models of the ASR. It is also shown that even with sufficiently correct production of phonemes, learners do not produce correct phrasal rhythm and intonation; therefore, joint training of sounds, rhythm and intonation within a single learning environment is beneficial. To mitigate recording imperfections, voice activity detection (VAD) is applied to all processed speech recordings. The try-outs showed that StudyIntonation can create transcriptions and process rhythmic patterns, but some specific problems with the transcription of connected speech were detected. The learner feedback for pronunciation assessment was also updated: a conventional mechanism based on dynamic time warping (DTW) was combined with a cross-recurrence quantification analysis (CRQA) approach, which resulted in better discriminating ability. The CRQA metrics combined with those of DTW were shown to add to the accuracy of learner performance estimation. The major implications for computer-assisted English pronunciation teaching are discussed.
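The abstract names two of the comparison mechanisms used in the study: the Levenshtein distance between expert and ASR phonetic transcriptions, and DTW-based matching of model and learner pitch contours (further combined with CRQA in the paper). The Python sketch below is a minimal, self-contained illustration of those two standard algorithms only; it is not the StudyIntonation implementation, and the phoneme strings and F0 values are hypothetical example data.

# Minimal illustration (not the StudyIntonation code) of two distance measures
# mentioned in the abstract: Levenshtein distance between phoneme sequences and
# dynamic time warping (DTW) cost between pitch contours. All data are made up.

def levenshtein(ref, hyp):
    """Edit distance between two phoneme sequences (lists of symbols)."""
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                      # deletions to reach an empty hypothesis
    for j in range(n + 1):
        d[0][j] = j                      # insertions from an empty reference
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # match/substitution
    return d[m][n]


def dtw_cost(x, y):
    """Accumulated DTW alignment cost between two pitch (F0) contours."""
    inf = float("inf")
    m, n = len(x), len(y)
    d = [[inf] * (n + 1) for _ in range(m + 1)]
    d[0][0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = abs(x[i - 1] - y[j - 1])
            d[i][j] = cost + min(d[i - 1][j],       # stretch x
                                 d[i][j - 1],       # stretch y
                                 d[i - 1][j - 1])   # advance both
    return d[m][n]


if __name__ == "__main__":
    # Hypothetical expert vs. ASR phoneme strings and model vs. learner F0 contours.
    expert_phones = "DH AH K AE T".split()
    asr_phones = "DH AH K AH T".split()
    model_f0 = [120.0, 140.0, 180.0, 160.0, 130.0]
    learner_f0 = [110.0, 115.0, 150.0, 170.0, 150.0, 125.0]

    print("Levenshtein distance:", levenshtein(expert_phones, asr_phones))  # 1 substitution
    print("DTW alignment cost:", dtw_cost(model_f0, learner_f0))

In the actual system, the ASR hypothesis would come from Kaldi and the pitch contours from the signal processing core; only the distance computations themselves are shown here, and the CRQA component is omitted.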
Pages: 1-22
Number of pages: 22