PROMPTTTS++: CONTROLLING SPEAKER IDENTITY IN PROMPT-BASED TEXT-TO-SPEECH USING NATURAL LANGUAGE DESCRIPTIONS

Cited: 1
Authors
Shimizu, Reo [1 ,2 ]
Yamamoto, Ryuichi [2 ,3 ]
Kawamura, Masaya [2 ,3 ]
Shirahata, Yuma [2 ,3 ]
Doi, Hironori [2 ,3 ]
Komatsu, Tatsuya [2 ,3 ]
Tachibana, Kentaro [2 ,3 ]
Affiliations
[1] Tohoku Univ, Sendai, Miyagi, Japan
[2] LINE Corp, Tokyo, Japan
[3] LY Corp, Tokyo, Japan
Source
2024 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2024) | 2024
Keywords
Text-to-speech; speech synthesis; speaker generation; mixture model; diffusion model;
DOI
10.1109/ICASSP48485.2024.10448173
Chinese Library Classification
O42 [Acoustics];
Subject Classification Codes
070206; 082403;
Abstract
We propose PromptTTS++, a prompt-based text-to-speech (TTS) synthesis system that allows control over speaker identity using natural language descriptions. To control speaker identity within the prompt-based TTS framework, we introduce the concept of speaker prompt, which describes voice characteristics (e.g., gender-neutral, young, old, and muffled) designed to be approximately independent of speaking style. Since there is no large-scale dataset containing speaker prompts, we first construct a dataset based on the LibriTTS-R corpus with manually annotated speaker prompts. We then employ a diffusion-based acoustic model with mixture density networks to model diverse speaker factors in the training data. Unlike previous studies that rely on style prompts describing only a limited aspect of speaker individuality, such as pitch, speaking speed, and energy, our method utilizes an additional speaker prompt to effectively learn the mapping from natural language descriptions to the acoustic features of diverse speakers. Our subjective evaluation results show that the proposed method can better control speaker characteristics than the methods without the speaker prompt. Audio samples are available at https://reppy4620.github.io/demo.promptttspp/.
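The abstract's mention of mixture density networks (MDNs) for modeling diverse speaker factors can be illustrated with a minimal sketch. In an MDN, a network head predicts mixture weights, means, and log-variances, and training minimizes the negative log-likelihood of the target under the resulting Gaussian mixture. The function below computes that loss for a single one-dimensional target; the function name, the 1-D simplification, and the interface are illustrative assumptions, not the paper's actual implementation.

```python
import math

def mdn_nll(weights, means, log_vars, target):
    """Negative log-likelihood of `target` under a 1-D Gaussian mixture.

    In an MDN-based acoustic model, a network head would predict
    (weights, means, log_vars) per frame; here they are passed directly.
    """
    assert abs(sum(weights) - 1.0) < 1e-6, "mixture weights must sum to 1"
    # Per-component log p(target | component) + log(mixture weight).
    log_probs = []
    for w, mu, lv in zip(weights, means, log_vars):
        var = math.exp(lv)
        log_norm = -0.5 * (math.log(2.0 * math.pi) + lv)
        log_probs.append(math.log(w) + log_norm
                         - 0.5 * (target - mu) ** 2 / var)
    # Log-sum-exp over components for numerical stability.
    m = max(log_probs)
    return -(m + math.log(sum(math.exp(lp - m) for lp in log_probs)))
```

With a single standard-normal component and `target = 0.0`, this reduces to `0.5 * log(2 * pi)`, the NLL of a unit Gaussian at its mean. In practice the predicted parameters would be vector-valued per acoustic frame and optimized jointly with the rest of the model.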
Pages: 12672-12676
Page count: 5