Automatic lexical stress and pitch accent detection for L2 English speech using multi-distribution deep neural networks

Cited: 36
Authors
Li, Kun [1 ,2 ]
Mao, Shaoguang [3 ]
Li, Xu [1 ]
Wu, Zhiyong [3 ]
Meng, Helen [1 ,3 ]
Affiliations
[1] Chinese Univ Hong Kong, Dept Syst Engn & Engn Management, Hong Kong, Hong Kong, Peoples R China
[2] SpeechX Ltd, Beijing, Peoples R China
[3] Tsinghua Univ, Grad Sch Shenzhen, Tsinghua CUHK Joint Res Ctr Media Sci Technol & S, Beijing, Peoples R China
Keywords
Lexical stress; Pitch accent; Non-native English; Language learning; Deep neural networks; PROMINENCE; FEATURES;
DOI
10.1016/j.specom.2017.11.003
Chinese Library Classification
O42 [Acoustics]
Subject Classification Codes
070206; 082403
Abstract
This paper investigates the use of multi-distribution deep neural networks (MD-DNNs) for automatic lexical stress detection and pitch accent detection, which are useful for suprasegmental mispronunciation detection and diagnosis in second-language (L2) English speech. The features used in this paper cover syllable-based prosodic features (including maximum syllable loudness, syllable nucleus duration and a pair of dynamic pitch values) as well as lexical and syntactic features (encoded as binary variables). As stressed/accented syllables are more prominent than their neighbors, the two preceding and two following syllables are also taken into consideration. Experimental results show that the MD-DNN for lexical stress detection achieves an accuracy of 87.9% in syllable classification (primary/secondary/no stress) for words with three or more syllables. This performance is much better than that of our previous work using Gaussian mixture models (GMMs) and the prominence model (PM), which achieved accuracies of 72.1% and 76.3%, respectively. Using a similar approach, the pitch accent detector obtains an accuracy of 90.2%, outperforming the GMM and PM results by about 9.6% and 6.9%, respectively.
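The abstract describes the feature design (four prosodic values plus binary lexical/syntactic features, with a context of two syllables on each side) but not how the inputs are assembled. The following is a minimal, hypothetical sketch of that idea: a per-syllable feature vector concatenated over a ±2-syllable window and fed to a plain feed-forward classifier with a three-way stress output. It is not the authors' MD-DNN implementation; the binary feature dimension (N_BINARY), the layer sizes, and the zero-padding at word boundaries are assumptions.

# Hypothetical sketch: context-window feature assembly and a plain feed-forward
# classifier standing in for the paper's MD-DNN lexical stress detector.
import numpy as np
import torch
import torch.nn as nn

N_PROSODIC = 4       # max syllable loudness, nucleus duration, 2 dynamic pitch values
N_BINARY = 20        # assumed size of the binary lexical/syntactic encoding
CONTEXT = 2          # two preceding and two following syllables
WINDOW = 2 * CONTEXT + 1
N_CLASSES = 3        # primary / secondary / no stress


def window_features(prosodic, binary):
    """Concatenate per-syllable features over a +/-2 syllable window.

    prosodic: (n_syllables, N_PROSODIC) float array
    binary:   (n_syllables, N_BINARY) 0/1 array
    Returns a (n_syllables, WINDOW * (N_PROSODIC + N_BINARY)) matrix,
    zero-padded at word boundaries (an assumption, not the paper's scheme).
    """
    feats = np.concatenate([prosodic, binary], axis=1)
    padded = np.pad(feats, ((CONTEXT, CONTEXT), (0, 0)))
    rows = [padded[i:i + WINDOW].ravel() for i in range(feats.shape[0])]
    return np.stack(rows)


# Plain MLP with a 3-way output; hidden-layer sizes are illustrative only.
model = nn.Sequential(
    nn.Linear(WINDOW * (N_PROSODIC + N_BINARY), 256),
    nn.ReLU(),
    nn.Linear(256, 256),
    nn.ReLU(),
    nn.Linear(256, N_CLASSES),
)

if __name__ == "__main__":
    # Toy example: a 4-syllable word with random feature values.
    prosodic = np.random.randn(4, N_PROSODIC).astype(np.float32)
    binary = np.random.randint(0, 2, size=(4, N_BINARY)).astype(np.float32)
    x = torch.from_numpy(window_features(prosodic, binary).astype(np.float32))
    logits = model(x)                  # (4, 3) unnormalized class scores
    stress = logits.argmax(dim=1)      # predicted stress class per syllable
    print(stress)

The same windowed-input layout carries over to the pitch accent detector described in the abstract, with the output layer reduced to an accented/unaccented decision.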
Pages: 28-36
Number of pages: 9