ZET-Speech: Zero-shot adaptive Emotion-controllable Text-to-Speech Synthesis with Diffusion and Style-based Models

Cited by: 2
Authors
Kang, Minki [1 ,2 ]
Han, Wooseok [1 ]
Hwang, Sung Ju [2 ]
Yang, Eunho [1 ,2 ]
Affiliations
[1] AITRICS, Seoul, South Korea
[2] Korea Adv Inst Sci & Technol, Daejeon, South Korea
Source
INTERSPEECH 2023 | 2023
Keywords
Text-to-Speech Synthesis; Emotional TTS
DOI
10.21437/Interspeech.2023-754
Chinese Library Classification
O42 [Acoustics]
Discipline Codes
070206; 082403
Abstract
Emotional Text-To-Speech (TTS) is an important task in the development of systems (e.g., human-like dialogue agents) that require natural and emotional speech. Existing approaches, however, only aim to produce emotional TTS for seen speakers during training, without consideration of the generalization to unseen speakers. In this paper, we propose ZET-Speech, a zero-shot adaptive emotion-controllable TTS model that allows users to synthesize any speaker's emotional speech using only a short, neutral speech segment and the target emotion label. Specifically, to enable a zero-shot adaptive TTS model to synthesize emotional speech, we propose domain adversarial learning and guidance methods on the diffusion model. Experimental results demonstrate that ZET-Speech successfully synthesizes natural and emotional speech with the desired emotion for both seen and unseen speakers. Samples are at https://ZET-Speech.github.io/ZET-Speech-Demo/.
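The record gives no implementation details, so as a rough illustration of the domain adversarial learning mentioned in the abstract, below is a minimal PyTorch sketch of a standard gradient-reversal (DANN-style) speaker adversary on a style embedding. All names, dimensions, and the `lambd` scale (`GradReverse`, `SpeakerAdversary`, `feat_dim`, `num_speakers`) are hypothetical and not taken from the paper; this is one common way to make a learned style representation speaker-invariant, not the authors' exact method.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) gradients
    in the backward pass, as in DANN-style adversarial training."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse and scale the gradient flowing into the style encoder.
        return -ctx.lambd * grad_output, None

class SpeakerAdversary(nn.Module):
    """Hypothetical speaker classifier trained through gradient reversal,
    so the upstream style encoder is pushed to discard speaker identity."""
    def __init__(self, feat_dim: int, num_speakers: int, lambd: float = 1.0):
        super().__init__()
        self.lambd = lambd
        self.classifier = nn.Sequential(
            nn.Linear(feat_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_speakers),
        )

    def forward(self, style_emb: torch.Tensor) -> torch.Tensor:
        reversed_emb = GradReverse.apply(style_emb, self.lambd)
        return self.classifier(reversed_emb)

# Usage sketch: add the adversary's cross-entropy loss to the TTS loss.
# The classifier learns to predict the speaker, while the reversed
# gradient drives the style encoder toward speaker-invariant features.
adversary = SpeakerAdversary(feat_dim=192, num_speakers=100)
style_emb = torch.randn(8, 192, requires_grad=True)  # dummy style embeddings
speaker_ids = torch.randint(0, 100, (8,))
loss = nn.functional.cross_entropy(adversary(style_emb), speaker_ids)
loss.backward()  # gradients w.r.t. style_emb are scaled by -lambd
```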
Pages: 4339-4343
Page count: 5