NON-AUTOREGRESSIVE SEQUENCE-TO-SEQUENCE VOICE CONVERSION

Cited by: 13
Authors
Hayashi, Tomoki [1,2]
Huang, Wen-Chin [2]
Kobayashi, Kazuhiro [1,2]
Toda, Tomoki [2]
Affiliations
[1] TARVO Inc, Nagoya, Aichi, Japan
[2] Nagoya Univ, Nagoya, Aichi, Japan
Source
2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021) | 2021
Keywords
Voice conversion; non-autoregressive; sequence-to-sequence; Transformer; Conformer
DOI
10.1109/ICASSP39728.2021.9413973
Chinese Library Classification (CLC)
O42 [Acoustics]
Subject classification codes
070206; 082403
Abstract
This paper proposes a novel voice conversion (VC) method based on non-autoregressive sequence-to-sequence (NAR-S2S) models. Inspired by the great success of NAR-S2S models such as FastSpeech in text-to-speech (TTS), we extend the FastSpeech2 model to the VC problem. We introduce the convolution-augmented Transformer (Conformer) in place of the Transformer, making it possible to capture both local and global context information from the input sequence. Furthermore, we extend the variance predictors to variance converters that explicitly convert the source speaker's prosody components, such as pitch and energy, into those of the target speaker. An experimental evaluation on a Japanese dataset of male and female speakers with 1,000 utterances demonstrates that the proposed model achieves more stable, faster, and better conversion than autoregressive S2S (AR-S2S) models such as Tacotron2 and the Transformer.
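To make the described architecture concrete, below is a minimal, hypothetical PyTorch sketch of the idea in the abstract: a Conformer encoder/decoder pair operating directly on mel-spectrograms, with "variance converters" that map the source speaker's pitch and energy contours to the target speaker's. Everything here (module names, hyperparameters, the exact conditioning scheme, and the use of torchaudio's Conformer) is an illustrative assumption rather than the authors' implementation; FastSpeech2-style duration modeling and length regulation are omitted for brevity.

```python
# Hypothetical sketch of a NAR-S2S VC model in the spirit of the paper.
# Assumed, not the authors' code: mel-in/mel-out, torchaudio Conformer
# stacks, and conv-based variance converters conditioned on hidden states.
import torch
import torch.nn as nn
from torchaudio.models import Conformer  # convolution-augmented Transformer


class VarianceConverter(nn.Module):
    """Convert a source prosody contour (pitch or energy) into the target
    speaker's contour, conditioned on encoder hidden states (assumed form)."""

    def __init__(self, dim: int, kernel: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(dim + 1, dim, kernel, padding=kernel // 2),
            nn.ReLU(),
            nn.Conv1d(dim, 1, kernel, padding=kernel // 2),
        )

    def forward(self, h: torch.Tensor, src: torch.Tensor) -> torch.Tensor:
        # h: (B, T, dim) hidden states; src: (B, T) source contour
        x = torch.cat([h.transpose(1, 2), src.unsqueeze(1)], dim=1)
        return self.net(x).squeeze(1)  # (B, T) predicted target contour


class NARS2SVC(nn.Module):
    def __init__(self, n_mels: int = 80, dim: int = 256):
        super().__init__()
        self.in_proj = nn.Linear(n_mels, dim)
        conformer_kwargs = dict(input_dim=dim, num_heads=4, ffn_dim=1024,
                                num_layers=4, depthwise_conv_kernel_size=31)
        self.encoder = Conformer(**conformer_kwargs)
        self.pitch_conv = VarianceConverter(dim)
        self.energy_conv = VarianceConverter(dim)
        # Embed the converted scalar contours back into the hidden dimension.
        self.pitch_embed = nn.Conv1d(1, dim, 9, padding=4)
        self.energy_embed = nn.Conv1d(1, dim, 9, padding=4)
        self.decoder = Conformer(**conformer_kwargs)
        self.out_proj = nn.Linear(dim, n_mels)

    def forward(self, src_mel, src_pitch, src_energy, lengths):
        # src_mel: (B, T, n_mels); src_pitch/src_energy: (B, T); lengths: (B,)
        h, lengths = self.encoder(self.in_proj(src_mel), lengths)
        pitch = self.pitch_conv(h, src_pitch)      # target-speaker pitch
        energy = self.energy_conv(h, src_energy)   # target-speaker energy
        h = h + self.pitch_embed(pitch.unsqueeze(1)).transpose(1, 2)
        h = h + self.energy_embed(energy.unsqueeze(1)).transpose(1, 2)
        h, _ = self.decoder(h, lengths)
        return self.out_proj(h), pitch, energy     # all frames in parallel


# Example: one parallel (non-autoregressive) forward pass over a batch.
model = NARS2SVC()
mel = torch.randn(2, 120, 80)
pitch, energy = torch.randn(2, 120), torch.randn(2, 120)
out, _, _ = model(mel, pitch, energy, torch.tensor([120, 100]))
print(out.shape)  # torch.Size([2, 120, 80])
```

Because all output frames are produced in a single forward pass rather than one frame at a time, inference cost does not grow with an autoregressive decoding loop, which is the source of the speed and stability gains the abstract claims over Tacotron2- and Transformer-style AR-S2S models.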
Pages: 7068-7072
Number of pages: 5
Related references
34 in total
[1] [Anonymous], 2014, INTERSPEECH.
[2] Bahdanau D., 2016, arXiv:1409.0473, DOI 10.48550/arXiv.1409.0473.
[3] Dai Z. H., 2019, 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019), p. 2978.
[4] Dauphin Y. N., 2017, Proceedings of Machine Learning Research, Vol. 70.
[5] Gulati A., Qin J., Chiu C.-C., Parmar N., Zhang Y., Yu J., Han W., Wang S., Zhang Z., Wu Y., Pang R., Conformer: Convolution-augmented Transformer for Speech Recognition, INTERSPEECH 2020, 2020, pp. 5036-5040.
[6] Hayashi T., 2019, OECC 2019.
[7] Hayashi T., 2020, kan-bayashi/NonARSeq (online).
[8] Hayashi T., 2020, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), p. 7654, DOI 10.1109/ICASSP40776.2020.9053512.
[9] Kain A. B., Hosom J.-P., Niu X., van Santen J. P. H., Fried-Oken M., Staehely J., Improving the intelligibility of dysarthric speech, Speech Communication, 2007, 49(9), pp. 743-759.
[10] Kameoka H., 2018, arXiv:1811.01609.