PROMPTTTS++: CONTROLLING SPEAKER IDENTITY IN PROMPT-BASED TEXT-TO-SPEECH USING NATURAL LANGUAGE DESCRIPTIONS

Cited by: 1
Authors
Shimizu, Reo [1 ,2 ]
Yamamoto, Ryuichi [2 ,3 ]
Kawamura, Masaya [2 ,3 ]
Shirahata, Yuma [2 ,3 ]
Doi, Hironori [2 ,3 ]
Komatsu, Tatsuya [2 ,3 ]
Tachibana, Kentaro [2 ,3 ]
Affiliations
[1] Tohoku Univ, Sendai, Miyagi, Japan
[2] LINE Corp, Tokyo, Japan
[3] LY Corp, Tokyo, Japan
Source
2024 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2024) | 2024
Keywords
Text-to-speech; speech synthesis; speaker generation; mixture model; diffusion model;
DOI
10.1109/ICASSP48485.2024.10448173
Chinese Library Classification
O42 [Acoustics];
Discipline codes
070206 ; 082403 ;
Abstract
We propose PromptTTS++, a prompt-based text-to-speech (TTS) synthesis system that allows control over speaker identity using natural language descriptions. To control speaker identity within the prompt-based TTS framework, we introduce the concept of speaker prompt, which describes voice characteristics (e.g., gender-neutral, young, old, and muffled) designed to be approximately independent of speaking style. Since there is no large-scale dataset containing speaker prompts, we first construct a dataset based on the LibriTTS-R corpus with manually annotated speaker prompts. We then employ a diffusion-based acoustic model with mixture density networks to model diverse speaker factors in the training data. Unlike previous studies that rely on style prompts describing only a limited aspect of speaker individuality, such as pitch, speaking speed, and energy, our method utilizes an additional speaker prompt to effectively learn the mapping from natural language descriptions to the acoustic features of diverse speakers. Our subjective evaluation results show that the proposed method can better control speaker characteristics than the methods without the speaker prompt. Audio samples are available at https://reppy4620.github.io/demo.promptttspp/.
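The abstract above says the acoustic model uses mixture density networks (MDNs, Bishop 1994) to capture diverse speaker factors. As a rough illustration of that component only — a minimal NumPy sketch, not the authors' implementation; all function names, weight matrices, and dimensions here are illustrative assumptions — an MDN head maps a hidden vector to the parameters of a Gaussian mixture and is trained by negative log-likelihood:

```python
# Minimal MDN sketch: hidden vector -> diagonal-GMM parameters -> NLL.
# Illustrative only; names/shapes are assumptions, not the paper's code.
import numpy as np

def mdn_params(h, W_pi, W_mu, W_sigma):
    """Map hidden vector h to mixture weights, means, and std devs."""
    logits = W_pi @ h
    pi = np.exp(logits - logits.max())
    pi /= pi.sum()                                    # softmax -> weights
    mu = (W_mu @ h).reshape(len(pi), -1)              # component means
    sigma = np.exp(W_sigma @ h).reshape(len(pi), -1)  # positive std devs
    return pi, mu, sigma

def mdn_nll(x, pi, mu, sigma):
    """Negative log-likelihood of feature vector x under the GMM."""
    # per-component diagonal Gaussian log-density
    log_comp = -0.5 * np.sum(((x - mu) / sigma) ** 2
                             + 2 * np.log(sigma)
                             + np.log(2 * np.pi), axis=1)
    return -np.log(np.sum(pi * np.exp(log_comp)))

rng = np.random.default_rng(0)
K, D, H = 4, 8, 16                    # mixtures, feature dim, hidden dim
h = rng.normal(size=H)
pi, mu, sigma = mdn_params(h,
                           rng.normal(size=(K, H)),
                           rng.normal(size=(K * D, H)),
                           0.1 * rng.normal(size=(K * D, H)))
loss = mdn_nll(rng.normal(size=D), pi, mu, sigma)
```

In the paper's setting the hidden vector would come from the prompt/text encoders, and minimizing this NLL lets one model emit a multimodal distribution over speaker-dependent acoustic features rather than a single average voice.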
Pages: 12672-12676
Page count: 5