LEARNING LATENT REPRESENTATIONS FOR STYLE CONTROL AND TRANSFER IN END-TO-END SPEECH SYNTHESIS

Times Cited: 0
Authors
Zhang, Ya-Jie [1 ,3 ]
Pan, Shifeng [2 ]
He, Lei [2 ]
Ling, Zhen-Hua [1 ]
Affiliations
[1] Univ Sci & Technol China, Natl Engn Lab Speech & Language Informat Proc, Hefei, Anhui, Peoples R China
[2] Microsoft China, Beijing, Peoples R China
[3] Microsoft STC Asia, Beijing, Peoples R China
Source
2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP) | 2019
Keywords
unsupervised learning; variational autoencoder; style transfer; speech synthesis;
DOI
10.1109/icassp.2019.8683623
CLC number
O42 [Acoustics];
Discipline codes
070206; 082403;
Abstract
In this paper, we introduce the Variational Autoencoder (VAE) into an end-to-end speech synthesis model to learn latent representations of speaking styles in an unsupervised manner. The style representation learned through the VAE shows good properties such as disentanglement, scaling, and combination, which make style control easy. Style transfer can be achieved in this framework by first inferring the style representation through the recognition network of the VAE, then feeding it into the TTS network to guide the style of the synthesized speech. To avoid Kullback-Leibler (KL) divergence collapse during training, several techniques are adopted. Finally, the proposed model shows good style control performance and outperforms the Global Style Token (GST) model in ABX preference tests on style transfer.
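The abstract's core machinery — sampling a latent style vector from the VAE recognition network's posterior and annealing the KL term to avoid collapse — can be illustrated with a minimal NumPy sketch. This is an illustrative assumption, not the paper's implementation: the diagonal-Gaussian posterior, the linear warmup schedule, and all function names here are hypothetical.

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    """KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over latent dims."""
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

def kl_weight(step, warmup_steps=10000):
    """Linear KL annealing, one common remedy for KL collapse."""
    return min(1.0, step / warmup_steps)

rng = np.random.default_rng(0)
mu = np.zeros(16)       # posterior mean from the recognition network (illustrative)
log_var = np.zeros(16)  # posterior log-variance (illustrative)

z = reparameterize(mu, log_var, rng)  # latent style vector fed to the TTS decoder
weighted_kl = kl_weight(step=500) * kl_divergence(mu, log_var)
```

At inference time, the same `z` (or a scaled/combined version of it) would condition the synthesis network, which is what enables the style scaling and combination the abstract describes.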
Pages: 6945-6949
Page count: 5