SC-GlowTTS: An Efficient Zero-Shot Multi-Speaker Text-To-Speech Model

Cited by: 32
Authors
Casanova, Edresson [1 ]
Shulby, Christopher [2 ]
Golge, Eren [3 ]
Muller, Nicolas Michael [4 ]
de Oliveira, Frederico Santos [5 ]
Candido Junior, Arnaldo [6 ]
Soares, Anderson da Silva [5 ]
Aluisio, Sandra Maria [1 ]
Ponti, Moacir Antonelli [1 ]
Affiliations
[1] Univ Sao Paulo, Inst Ciencias Matemat & Comp, Sao Paulo, Brazil
[2] DefinedCrowd Corp, Seattle, WA USA
[3] Coqui, Berlin, Germany
[4] Fraunhofer AISEC, Garching, Germany
[5] Univ Fed Goias, Goiania, GO, Brazil
[6] Univ Tecnol Fed Parana, Curitiba, Parana, Brazil
Source
INTERSPEECH 2021 | 2021
Keywords
zero-shot multi-speaker TTS; text-to-speech; multi-speaker modeling; zero-shot voice conversion
DOI
10.21437/Interspeech.2021-1774
CLC Classification Number
R36 [Pathology]; R76 [Otorhinolaryngology]
Discipline Classification Code
100104; 100213
Abstract
In this paper, we propose SC-GlowTTS, an efficient zero-shot multi-speaker text-to-speech model that improves voice similarity for speakers unseen during training. We propose a speaker-conditional architecture built on a flow-based decoder that operates in a zero-shot scenario. As text encoders, we explore a dilated residual convolutional encoder, a gated convolutional encoder, and a transformer-based encoder. Additionally, we show that fine-tuning a GAN-based vocoder on the spectrograms predicted by the TTS model for the training dataset can significantly improve both similarity and speech quality for new speakers. Our model converges using only 11 training speakers, reaching state-of-the-art similarity for new speakers as well as high speech quality.
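The key architectural idea described above is conditioning an invertible (flow-based) decoder, as in Glow-TTS, on an external speaker embedding so that spectrogram generation becomes speaker-aware. The sketch below illustrates this with a speaker-conditioned affine coupling layer in PyTorch. It is an illustrative reconstruction, not the authors' code: the class name SpeakerConditionalCoupling, the layer sizes, and the way the embedding is injected are all assumptions for the sake of the example.

# Minimal sketch (assumed PyTorch implementation, hypothetical names) of a
# speaker-conditional affine coupling layer for a flow-based TTS decoder.
import torch
import torch.nn as nn

class SpeakerConditionalCoupling(nn.Module):
    """Affine coupling layer whose scale/shift network also sees a
    speaker embedding, making the invertible decoder speaker-aware."""
    def __init__(self, channels: int, spk_dim: int, hidden: int = 192):
        super().__init__()
        self.half = channels // 2  # assumes an even channel count
        # Project the speaker embedding and add it to the hidden
        # activations, conditioning the transform on speaker identity.
        self.spk_proj = nn.Conv1d(spk_dim, hidden, kernel_size=1)
        self.net = nn.Sequential(
            nn.Conv1d(self.half, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.out = nn.Conv1d(hidden, 2 * self.half, kernel_size=1)

    def forward(self, x, spk_emb, reverse: bool = False):
        # x: (batch, channels, time); spk_emb: (batch, spk_dim)
        xa, xb = x[:, : self.half], x[:, self.half :]
        h = self.net(xa) + self.spk_proj(spk_emb.unsqueeze(-1))
        log_s, t = self.out(h).chunk(2, dim=1)
        if not reverse:  # training / likelihood direction
            yb = xb * torch.exp(log_s) + t
            logdet = log_s.sum(dim=(1, 2))
        else:            # inference: invert the transform exactly
            yb = (xb - t) * torch.exp(-log_s)
            logdet = -log_s.sum(dim=(1, 2))
        return torch.cat([xa, yb], dim=1), logdet

if __name__ == "__main__":
    layer = SpeakerConditionalCoupling(channels=80, spk_dim=256)
    x = torch.randn(2, 80, 100)  # (batch, mel channels, frames)
    e = torch.randn(2, 256)      # speaker embedding from an external encoder
    y, logdet = layer(x, e)
    x_rec, _ = layer(y, e, reverse=True)
    print(torch.allclose(x, x_rec, atol=1e-5))  # True: the layer is invertible

Because the coupling transform is exactly invertible, the same layer serves both directions: the forward pass gives the log-determinant needed for maximum-likelihood training, and the reverse pass synthesizes spectrograms from noise conditioned on an unseen speaker's embedding, which is what enables the zero-shot behavior.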
Pages: 3645-3649
Number of pages: 5