ZERO-SHOT TEXT-TO-SPEECH SYNTHESIS CONDITIONED USING SELF-SUPERVISED SPEECH REPRESENTATION MODEL

Cited by: 1
Authors
Fujita, Kenichi [1 ]
Ashihara, Takanori [1 ]
Kanagawa, Hiroki [1 ]
Moriya, Takafumi [1 ]
Ijima, Yusuke [1 ]
Affiliation
[1] NTT Corp, Tokyo, Japan
Source
2023 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING WORKSHOPS, ICASSPW | 2023
Keywords
Speech synthesis; self-supervised learning model; speaker embeddings; zero-shot TTS;
DOI
10.1109/ICASSPW59220.2023.10193459
Chinese Library Classification (CLC)
O42 [Acoustics];
Subject Classification Codes
070206; 082403;
Abstract
This paper proposes a zero-shot text-to-speech (TTS) method conditioned on a speech-representation model acquired through self-supervised learning (SSL). Conventional methods that use embedding vectors from x-vectors or global style tokens still fall short of reproducing the speaker characteristics of unseen speakers. The novel point of the proposed method is the direct use of the SSL model to obtain embedding vectors from speech representations trained on a large amount of data. We also introduce separate conditioning of the acoustic features and the phoneme duration predictor to disentangle rhythm-based speaker characteristics from acoustic-feature-based ones. The disentangled embeddings enable better reproduction of unseen speakers and rhythm transfer conditioned on a different utterance. Objective and subjective evaluations showed that the proposed method synthesizes speech with improved similarity and achieves speech-rhythm transfer.
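To make the conditioning scheme described in the abstract concrete, the following is a minimal sketch rather than the authors' implementation: it assumes torchaudio's pretrained wav2vec 2.0 bundle as a stand-in SSL model, simple mean pooling over frames, and two linear projections that yield one embedding for the acoustic-feature decoder and one for the phoneme duration predictor. The module name, dimensions, and the file reference.wav are illustrative only; the paper's actual embedding extraction and conditioning layers are not reproduced here.

# Minimal sketch (not the authors' implementation): pool SSL features into an
# utterance-level representation and split it into two conditioning vectors,
# one for the acoustic-feature decoder and one for the phoneme duration predictor.
# Assumes torchaudio's pretrained wav2vec 2.0 bundle as a stand-in SSL model.
import torch
import torchaudio

bundle = torchaudio.pipelines.WAV2VEC2_BASE
ssl_model = bundle.get_model().eval()

class SpeakerConditioner(torch.nn.Module):
    """Projects a pooled SSL representation into two embeddings:
    one conditioning acoustic features, one conditioning phoneme durations."""
    def __init__(self, ssl_dim: int = 768, emb_dim: int = 256):
        super().__init__()
        self.to_acoustic = torch.nn.Linear(ssl_dim, emb_dim)  # timbre/acoustic branch
        self.to_duration = torch.nn.Linear(ssl_dim, emb_dim)  # rhythm/duration branch

    def forward(self, wav: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        with torch.no_grad():
            feats, _ = ssl_model.extract_features(wav)  # list of per-layer outputs
        pooled = feats[-1].mean(dim=1)                  # mean-pool frames: utterance level
        return self.to_acoustic(pooled), self.to_duration(pooled)

# Usage: the two embeddings condition separate TTS modules, so rhythm can be
# transferred from one reference utterance while timbre comes from another.
wav, sr = torchaudio.load("reference.wav")  # hypothetical reference utterance
wav = torchaudio.functional.resample(wav, sr, bundle.sample_rate)
acoustic_emb, duration_emb = SpeakerConditioner()(wav)

Because the two branches are conditioned separately, swapping the reference utterance fed to the duration branch changes only the speech rhythm, which is the disentanglement the abstract refers to.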
Pages: 5