LibriTTS-R: A Restored Multi-Speaker Text-to-Speech Corpus

Cited by: 14
Authors
Koizumi, Yuma [1 ]
Zen, Heiga [1 ]
Karita, Shigeki [1 ]
Ding, Yifan [1 ]
Yatabe, Kohei [2 ]
Morioka, Nobuyuki [1 ]
Bacchiani, Michiel [1 ]
Zhang, Yu [3 ]
Han, Wei [3 ]
Bapna, Ankur [3 ]
Affiliations
[1] Google, Tokyo, Japan
[2] Tokyo Univ Agr Technol, Tokyo, Japan
[3] Google, Mountain View, CA USA
Source
INTERSPEECH 2023 | 2023
Keywords
Text-to-speech; dataset; speech restoration
DOI
10.21437/Interspeech.2023-1584
Chinese Library Classification: O42 [Acoustics]
Discipline codes: 070206; 082403
Abstract
This paper introduces a new speech dataset called "LibriTTS-R" designed for text-to-speech (TTS) use. It is derived by applying speech restoration to the LibriTTS corpus, which consists of 585 hours of speech data at a 24 kHz sampling rate from 2,456 speakers and the corresponding texts. The constituent samples of LibriTTS-R are identical to those of LibriTTS, with only the sound quality improved. Experimental results show that the ground-truth samples of LibriTTS-R have significantly improved sound quality compared to those of LibriTTS. In addition, a neural end-to-end TTS model trained with LibriTTS-R achieved speech naturalness on par with that of the ground-truth samples. The corpus is freely available for download from http://www.openslr.org/141/.
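Since LibriTTS-R mirrors the LibriTTS sample inventory, it can be loaded with the same directory-walking logic. Below is a minimal sketch, assuming the standard LibriTTS-style layout (`root/subset/speaker/chapter/`, with each `*.wav` accompanied by a `*.normalized.txt` transcript); the function name and the layout assumption are illustrative, not part of an official loader API.

```python
import os

def collect_libritts_pairs(root):
    """Walk a LibriTTS(-R)-style tree and pair each .wav file with its
    .normalized.txt transcript, returning (wav_path, transcript) tuples.
    Files without a matching transcript are skipped."""
    pairs = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in sorted(filenames):
            if not name.endswith(".wav"):
                continue
            stem = name[: -len(".wav")]
            txt_path = os.path.join(dirpath, stem + ".normalized.txt")
            if os.path.exists(txt_path):
                with open(txt_path, encoding="utf-8") as f:
                    text = f.read().strip()
                pairs.append((os.path.join(dirpath, name), text))
    return pairs
```

The pairing relies only on file naming, so the same function works for both LibriTTS and LibriTTS-R once a subset archive from the download page has been extracted.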
Pages: 5496-5500
Page count: 5