Cross-lingual, Multi-speaker Text-To-Speech Synthesis Using Neural Speaker Embedding

Cited by: 31
Authors
Chen, Mengnan [1 ]
Chen, Minchuan [2 ]
Liang, Shuang [2 ]
Ma, Jun [2 ]
Chen, Lei [1 ]
Wang, Shaojun [2 ]
Xiao, Jing [2 ]
Affiliations
[1] East China Normal Univ, Shanghai, Peoples R China
[2] Ping An Technol, Shenzhen, Guangdong, Peoples R China
Source
INTERSPEECH 2019 | 2019
Keywords
neural TTS; multi-speaker modeling; multilanguage; speaker embedding;
DOI
10.21437/Interspeech.2019-1632
Chinese Library Classification
R36 [Pathology]; R76 [Otorhinolaryngology];
Discipline classification codes
100104 ; 100213 ;
Abstract
Neural network-based models for text-to-speech (TTS) synthesis have made significant progress in recent years. In this paper, we present a cross-lingual, multi-speaker, end-to-end neural TTS framework that can model speaker characteristics and synthesize speech in different languages. We implement the model by introducing a separately trained neural speaker embedding network, which represents the latent structure of different speakers and language pronunciations. We train the speech synthesis network bilingually and demonstrate the feasibility of synthesizing a Chinese speaker's English speech and vice versa. We also explore different methods for adapting the model to a new speaker using only a few speech samples. Experimental results show that, with only a few minutes of audio from a new speaker, the proposed model can synthesize speech bilingually and achieve decent naturalness and similarity in both languages.
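The abstract describes conditioning a bilingual speech synthesis network on a separately trained neural speaker embedding. A common way to realize such conditioning (not necessarily the paper's exact method) is to broadcast the fixed speaker vector across all encoder timesteps and concatenate it to each frame before decoding. The sketch below assumes this concatenation scheme; the function name, dimensions, and shapes are illustrative, not taken from the paper.

```python
import numpy as np

def concat_speaker_embedding(encoder_outputs: np.ndarray,
                             speaker_embedding: np.ndarray) -> np.ndarray:
    """Tile a fixed speaker embedding across all encoder timesteps and
    concatenate it to each frame, so the attention/decoder stage sees
    speaker identity at every step."""
    T = encoder_outputs.shape[0]
    tiled = np.tile(speaker_embedding, (T, 1))            # (T, emb_dim)
    return np.concatenate([encoder_outputs, tiled], axis=-1)  # (T, enc_dim + emb_dim)

# Toy shapes: 50 encoder frames of size 256, a 64-dim speaker vector
# (as might be produced by a separately trained speaker encoder).
enc = np.random.randn(50, 256)
spk = np.random.randn(64)
conditioned = concat_speaker_embedding(enc, spk)
print(conditioned.shape)  # (50, 320)
```

Because the speaker vector comes from a network trained on its own objective, swapping in an embedding computed from a few minutes of a new speaker's audio is what enables the few-sample adaptation the abstract mentions.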
Pages: 2105-2109
Page count: 5