Unsupervised Learning For Sequence-to-sequence Text-to-speech For Low-resource Languages

Cited by: 14
Authors
Zhang, Haitong [1 ]
Lin, Yue [1 ]
Affiliations
[1] NetEase Games AI Lab, Hangzhou, Peoples R China
Source
INTERSPEECH 2020 | 2020
Keywords
unsupervised learning; sequence-to-sequence text-to-speech; low-resource languages;
DOI
10.21437/Interspeech.2020-1403
Chinese Library Classification
R36 [Pathology]; R76 [Otorhinolaryngology];
Subject Classification Codes
100104; 100213;
Abstract
Recently, sequence-to-sequence models with attention have been successfully applied in text-to-speech (TTS). These models can generate near-human speech given a large, accurately transcribed speech corpus. However, preparing such a large dataset is both expensive and laborious. To alleviate this heavy data demand, we propose a novel unsupervised pre-training mechanism in this paper. Specifically, we first use a vector-quantization variational autoencoder (VQ-VAE) to extract unsupervised linguistic units from large-scale, publicly available, untranscribed speech. We then pre-train the sequence-to-sequence TTS model using the <unsupervised linguistic units, audio> pairs. Finally, we fine-tune the model with a small amount of <text, audio> paired data from the target speaker. Both objective and subjective evaluations show that the proposed method synthesizes more intelligible and natural speech with the same amount of paired training data. In addition, we extend the proposed method to simulated low-resource languages and verify its effectiveness using objective evaluation.
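The core of the pre-training pipeline described above is the VQ-VAE bottleneck that turns continuous encoder frames of untranscribed audio into discrete "unsupervised linguistic units". The following is a minimal, hedged sketch of such a vector-quantization layer in PyTorch; it is not the authors' code, and the codebook size (256), code dimension (64), and commitment weight (0.25) are illustrative assumptions rather than values from the paper.

# Minimal sketch (not the authors' implementation) of a VQ-VAE bottleneck
# that maps continuous encoder frames to discrete unit IDs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    def __init__(self, num_codes=256, code_dim=64, beta=0.25):
        super().__init__()
        # Learnable codebook of discrete "linguistic units"
        self.codebook = nn.Embedding(num_codes, code_dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta  # commitment-loss weight (assumed value)

    def forward(self, z_e):
        # z_e: (batch, frames, code_dim) continuous encoder outputs
        flat = z_e.reshape(-1, z_e.size(-1))                       # (B*T, D)
        # Squared L2 distance from every frame to every codebook entry
        dist = (flat.pow(2).sum(1, keepdim=True)
                - 2 * flat @ self.codebook.weight.t()
                + self.codebook.weight.pow(2).sum(1))
        codes = dist.argmin(dim=1)                                 # discrete unit IDs
        z_q = self.codebook(codes).view_as(z_e)                    # quantized frames
        # Codebook loss + commitment loss; straight-through estimator
        # passes decoder gradients to the encoder unchanged.
        loss = F.mse_loss(z_q, z_e.detach()) + self.beta * F.mse_loss(z_e, z_q.detach())
        z_q = z_e + (z_q - z_e).detach()
        return z_q, codes.view(z_e.shape[:-1]), loss

In the pre-training stage sketched here, the per-frame `codes` would stand in for phoneme-like symbols: they are paired with the corresponding audio to train the sequence-to-sequence TTS model, which is then fine-tuned on the small set of real <text, audio> pairs from the target speaker.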
Pages: 3161-3165
Page count: 5