LOW-RESOURCE EXPRESSIVE TEXT-TO-SPEECH USING DATA AUGMENTATION

Cited by: 24
Authors
Huybrechts, Goeric [1 ]
Merritt, Thomas [1 ]
Comini, Giulia [1 ]
Perz, Bartek [1 ]
Shah, Raahil [1 ]
Lorenzo-Trueba, Jaime [1 ]
Affiliations
[1] Amazon Alexa TTS Research, Cambridge, England
Source
2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021) | 2021
Keywords
Text-to-speech; low-resource; data augmentation; expressive speech
DOI
10.1109/ICASSP39728.2021.9413466
Chinese Library Classification
O42 [Acoustics]
Discipline Codes
070206; 082403
Abstract
While recent neural text-to-speech (TTS) systems perform remarkably well, they typically require a substantial amount of recordings from the target speaker reading in the desired speaking style. In this work, we present a novel 3-step methodology that circumvents the costly operation of recording large amounts of target data, building expressive style voices from as little as 15 minutes of such recordings. First, we augment the data via voice conversion, leveraging recordings in the desired speaking style from other speakers. Next, we train a TTS model on this synthetic data together with the available recordings. Finally, we fine-tune that model to further increase quality. Our evaluations show that the proposed approach brings significant improvements over non-augmented models across many perceived aspects of synthesised speech. We demonstrate it on two styles (newscaster and conversational), on various speakers, and on both single- and multi-speaker models, illustrating its robustness.
Pages: 6593-6597
Page count: 5
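To make the data flow of the 3-step recipe concrete, the sketch below shows one plausible way to wire it up in Python. This is an illustration only: the paper publishes no code, and every name here (Utterance, VoiceConversion, TTSModel, build_expressive_voice) is a hypothetical stub standing in for real voice-conversion and TTS training routines.

```python
# Hypothetical sketch of the paper's 3-step recipe (augment -> train ->
# fine-tune). All class and function names are placeholders, not the
# authors' code; the stubs only make the data flow explicit.
from dataclasses import dataclass
from typing import List


@dataclass
class Utterance:
    speaker: str
    style: str   # e.g. "newscaster" or "conversational"
    text: str
    audio: list  # stand-in for a waveform or mel-spectrogram


class VoiceConversion:
    """Stub for a many-to-many voice-conversion model."""

    def convert(self, source: Utterance, target_speaker: str) -> Utterance:
        # A real VC model would re-render the audio with the target
        # speaker's identity while preserving the expressive style;
        # this stub only relabels the metadata.
        return Utterance(target_speaker, source.style, source.text, source.audio)


class TTSModel:
    """Stub for a neural TTS acoustic model."""

    def train(self, data: List[Utterance]) -> None:
        print(f"training on {len(data)} utterances")

    def fine_tune(self, data: List[Utterance]) -> None:
        print(f"fine-tuning on {len(data)} utterances")


def build_expressive_voice(target_data: List[Utterance],
                           supporting_data: List[Utterance],
                           target_speaker: str) -> TTSModel:
    vc, tts = VoiceConversion(), TTSModel()
    # Step 1: augment -- convert expressive recordings from supporting
    # speakers into the target speaker's voice.
    synthetic = [vc.convert(u, target_speaker) for u in supporting_data]
    # Step 2: train the TTS model on synthetic plus real target data.
    tts.train(synthetic + target_data)
    # Step 3: fine-tune on the small set of real recordings only.
    tts.fine_tune(target_data)
    return tts
```

Per the abstract, the ordering is the point: the synthetic data supplies coverage of the expressive style, while the final fine-tuning pass on the roughly 15 minutes of real recordings is what further increases quality.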