FOCUSING ON ATTENTION: PROSODY TRANSFER AND ADAPTATIVE OPTIMIZATION STRATEGY FOR MULTI-SPEAKER END-TO-END SPEECH SYNTHESIS

Cited: 0
Authors
Fu, Ruibo [1,2]
Tao, Jianhua [1,2,3]
Wen, Zhengqi [1]
Yi, Jiangyan [1]
Wang, Tao [1,2]
Affiliations
[1] Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing, Peoples R China
[3] CAS Ctr Excellence Brain Sci & Intelligence Techn, Beijing, Peoples R China
Source
2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING | 2020
Funding
National Natural Science Foundation of China;
Keywords
prosody transfer; optimization strategy; speaker adaptation; attention; speech synthesis;
DOI
10.1109/icassp40776.2020.9054319
CLC number
O42 [Acoustics];
Discipline codes
070206; 082403;
Abstract
End-to-end speech synthesis can generate high-quality synthetic speech and achieve high similarity scores with low-resource adaptation data. However, generalization to out-of-domain texts remains a challenging task: the limited adaptation data leads to unacceptable errors and poor prosody in the synthetic speech. In this paper, we present two novel methods that address these problems by focusing on the attention mechanism. First, in contrast to conventional methods that extract prosody embeddings as a conditioning input, a duration controller with a feedback mechanism is proposed, which controls the state transitions in the sequence-to-sequence model more directly and precisely. Second, to alleviate the impact of mismatched text-audio pairs on the model, an adaptative optimization strategy that weights each training sample by its matching degree is also proposed. Experimental results on a Mandarin dataset show that the proposed methods improve both robustness and overall naturalness.
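The record reproduces only the abstract, so no implementation detail is given. As a minimal sketch of the second idea, the PyTorch snippet below weights each utterance's reconstruction loss by a proxy "matching degree" derived from how peaked the text-audio attention alignment is. The entropy-based proxy and the names matching_degree and adaptive_loss are assumptions for illustration, not the authors' actual formulation.

```python
import torch
import torch.nn.functional as F

def matching_degree(align, eps=1e-8):
    """Proxy for how well a text-audio pair matches, in [0, 1].
    align: (T_dec, T_enc) attention weights for one utterance.
    Peaked (near one-hot) attention rows -> low entropy -> high score."""
    entropy = -(align * (align + eps).log()).sum(dim=-1)        # (T_dec,)
    max_entropy = torch.log(torch.tensor(float(align.size(-1))))
    return 1.0 - entropy.mean() / max_entropy

def adaptive_loss(mel_pred, mel_target, align):
    """Down-weight poorly matched text-audio pairs in the batch loss.
    mel_pred, mel_target: (B, T_dec, n_mels); align: (B, T_dec, T_enc)."""
    # Per-utterance L1 reconstruction loss.
    per_utt = F.l1_loss(mel_pred, mel_target, reduction="none").mean(dim=(1, 2))
    # Matching-degree weights; detached so they gate the loss
    # without receiving gradients themselves.
    weights = torch.stack([matching_degree(a) for a in align]).detach()
    return (weights * per_utt).sum() / weights.sum().clamp_min(1e-8)
```

In this sketch, utterances whose alignments are diffuse (a common symptom of mismatched transcripts) contribute less to the gradient, which is one plausible reading of "consider the matching degree of the training sample"; the paper itself should be consulted for the actual strategy.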
Pages: 6709-6713
Page count: 5