Knowledge-based Linguistic Encoding for End-to-End Mandarin Text-to-Speech Synthesis

Cited by: 7
Authors
Li, Jingbei [1 ]
Wu, Zhiyong [1 ]
Li, Runnan [1 ]
Zhi, Pengpeng [2 ]
Yang, Song [2 ]
Meng, Helen [3 ]
Affiliations
[1] Tsinghua Univ, Dept Comp Sci & Technol, Beijing, Peoples R China
[2] TAL Educ Grp, AI Lab, Beijing, Peoples R China
[3] Chinese Univ Hong Kong, Hong Kong, Peoples R China
Source
INTERSPEECH 2019 | 2019
Funding
National Natural Science Foundation of China
Keywords
end-to-end text-to-speech system; knowledge-based learning; linguistic encoding; multi-task learning;
DOI
10.21437/Interspeech.2019-1118
Chinese Library Classification
R36 [Pathology]; R76 [Otorhinolaryngology]
Discipline Codes
100104; 100213
Abstract
Recent research has shown the superior performance of end-to-end architectures in text-to-speech (TTS) synthesis. However, given the complex linguistic structure of Chinese, using Chinese characters directly for Mandarin TTS may suffer from poor linguistic encoding performance, resulting in improper word tokenization and pronunciation errors. To ensure the naturalness and intelligibility of synthetic speech, state-of-the-art Mandarin TTS systems employ a pipeline of components, such as word tokenization, part-of-speech (POS) tagging and grapheme-to-phoneme (G2P) conversion, to produce knowledge-enhanced inputs that alleviate the problems caused by linguistic encoding. These components are well-designed and grounded in linguistic expertise, but they are trained individually, leading to compounding errors in the TTS system. In this paper, to reduce the complexity of the Mandarin TTS system and bring further improvement, we propose a knowledge-based linguistic encoder for the character-based end-to-end Mandarin TTS system. Built with a multi-task learning structure, the proposed encoder can learn from linguistic analysis subtasks, providing robust and discriminative linguistic encodings for the subsequent speech generation decoder. Experimental results demonstrate the effectiveness of the proposed framework: compared with the state-of-the-art baseline approach, the word tokenization error drops from 12.81% to 1.58% and the syllable pronunciation error drops from 10.89% to 2.81%, with a mean opinion score (MOS) improvement from 3.76 to 3.87.
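The multi-task structure described in the abstract — one shared character encoder feeding several linguistic analysis heads (word tokenization, POS tagging, G2P) — can be sketched as below. This is a minimal illustrative forward pass in NumPy, not the paper's implementation; all dimensions, tag inventories, and layer choices are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes -- the paper does not specify exact dimensions.
VOCAB, EMB, HID = 100, 16, 32
N_SEG_TAGS = 4    # B/I/E/S word-boundary tags for word tokenization
N_POS_TAGS = 10   # POS tag inventory (illustrative)
N_PHONES = 60     # syllable/phoneme inventory for G2P (illustrative)

# Shared parameters: a character embedding plus one hidden layer
# stand in for the shared linguistic encoder.
E = rng.normal(scale=0.1, size=(VOCAB, EMB))
W = rng.normal(scale=0.1, size=(EMB, HID))

# Task-specific heads hang off the shared encoding (multi-task learning):
# each subtask supervises the shared encoder through its own output layer.
heads = {
    "seg": rng.normal(scale=0.1, size=(HID, N_SEG_TAGS)),
    "pos": rng.normal(scale=0.1, size=(HID, N_POS_TAGS)),
    "g2p": rng.normal(scale=0.1, size=(HID, N_PHONES)),
}

def softmax(x):
    z = np.exp(x - x.max(axis=-1, keepdims=True))
    return z / z.sum(axis=-1, keepdims=True)

def encode(char_ids):
    """Shared linguistic encoding, later consumed by the speech decoder."""
    return np.tanh(E[char_ids] @ W)  # shape: (seq_len, HID)

def task_outputs(char_ids):
    """Per-character probability distributions for each linguistic subtask."""
    h = encode(char_ids)
    return {name: softmax(h @ W_t) for name, W_t in heads.items()}

# A toy 3-character input "sentence" (arbitrary character ids).
out = task_outputs(np.array([3, 17, 42]))
```

At training time, each head would get its own cross-entropy loss against subtask labels, and the summed losses would update the shared encoder — which is what lets the subtasks regularize the linguistic encodings passed to the decoder.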
Pages: 4494-4498
Page count: 5