Czech HMM-Based Speech Synthesis

Cited: 0
Authors
Hanzlicek, Zdenek [1]
Affiliations
[1] Univ W Bohemia, Dept Cybernet, Plzen 30614, Czech Republic
Source
TEXT, SPEECH AND DIALOGUE | 2010 / Vol. 6231
Keywords
HMM-based speech synthesis; TTS; Czech language; CORPUS;
DOI
none available
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
In this paper, first experiments on statistical parametric HMM-based speech synthesis for the Czech language are described. In this synthesis method, trajectories of speech parameters are generated from the trained hidden Markov models, and the final speech waveform is synthesized from these parameters. In our experiments, spectral properties were represented by mel cepstrum coefficients. For the waveform synthesis, the corresponding MLSA filter, excited by pulses or noise, was utilized. Besides this basic setup, the high-quality analysis/synthesis system STRAIGHT was employed for a more sophisticated speech representation. For more robust model parameter estimation, HMMs are clustered using a decision-tree-based context clustering algorithm. For this purpose, phonetic and prosodic contextual factors proposed for the Czech language are taken into account. The created clustering trees are also employed for the synthesis of speech units unseen during the training stage. Evaluation by subjective listening tests showed that speech produced by the combination of the HMM-based TTS system and STRAIGHT is of quality comparable to that of speech synthesized by a unit selection TTS system trained on the same speech data.
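The voiced/unvoiced excitation scheme mentioned in the abstract (a pulse train for voiced frames, white noise for unvoiced ones, driving an MLSA filter) can be sketched as below. This is a minimal illustration, not the paper's actual implementation; the function name, frame length, and sample rate are illustrative assumptions.

```python
import random

def make_excitation(f0_per_frame, frame_len=80, sample_rate=8000):
    """Build a simple excitation signal: periodic pulses for voiced
    frames (f0 > 0 Hz), zero-mean white noise for unvoiced frames
    (f0 == 0). In an HMM-based synthesizer this excitation would then
    drive an MLSA filter parameterised by mel-cepstral coefficients.
    Hypothetical sketch, not the system described in the paper."""
    signal = []
    phase = 0.0  # samples accumulated since the last glottal pulse
    for f0 in f0_per_frame:
        if f0 > 0:  # voiced: pulse train with period sample_rate / f0
            period = sample_rate / f0
            for _ in range(frame_len):
                phase += 1.0
                if phase >= period:
                    phase -= period
                    signal.append(1.0)  # glottal pulse
                else:
                    signal.append(0.0)
        else:  # unvoiced: white noise, reset pulse phase
            signal.extend(random.uniform(-1.0, 1.0)
                          for _ in range(frame_len))
            phase = 0.0
    return signal

# Example: two voiced frames at 100 Hz followed by one unvoiced frame.
exc = make_excitation([100.0, 100.0, 0.0])
```

With an 80-sample frame at 8 kHz, a 100 Hz voiced frame yields exactly one pulse per frame; feeding this signal through a spectral-envelope filter such as MLSA would produce the synthetic waveform.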
Pages: 291-298 (8 pages)