Neural Source-Filter Waveform Models for Statistical Parametric Speech Synthesis

Cited by: 75
Authors
Wang, Xin [1]
Takaki, Shinji [2]
Yamagishi, Junichi [1,3]
Affiliations
[1] National Institute of Informatics, Tokyo 101-8340, Japan
[2] Nagoya Institute of Technology, Nagoya, Aichi 466-8555, Japan
[3] University of Edinburgh, Centre for Speech Technology Research, Edinburgh EH8 9YL, Midlothian, Scotland
Keywords
Training; Mathematical model; Acoustics; Computational modeling; Speech synthesis; Neural networks; waveform model; short-time Fourier transform; identification
DOI
10.1109/TASLP.2019.2956145
CLC number
O42 [Acoustics]
Subject classification codes
070206; 082403
Abstract
Neural waveform models have demonstrated better performance than conventional vocoders for statistical parametric speech synthesis. One of the best models, called WaveNet, uses an autoregressive (AR) approach to model the distribution of waveform sampling points, but it has to generate a waveform in a time-consuming sequential manner. Some new models that use inverse-autoregressive flow (IAF) can generate a whole waveform in a one-shot manner but require either a larger amount of training time or a complicated model architecture plus a blend of training criteria. As an alternative to AR and IAF-based frameworks, we propose a neural source-filter (NSF) waveform modeling framework that is straightforward to train and fast to generate waveforms. This framework requires three components to generate waveforms: a source module that generates a sine-based signal as excitation, a non-AR dilated-convolution-based filter module that transforms the excitation into a waveform, and a conditional module that pre-processes the input acoustic features for the source and filter modules. This framework minimizes spectral-amplitude distances for model training, which can be efficiently implemented using short-time Fourier transform routines. As an initial NSF study, we designed three NSF models under the proposed framework and compared them with WaveNet using our deep learning toolkit. It was demonstrated that the NSF models generated waveforms at least 100 times faster than our WaveNet-vocoder, and the quality of the synthetic speech from the best NSF model was comparable to that from WaveNet on a large single-speaker Japanese speech corpus.
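
To make the abstract's two key ideas concrete, the sketch below illustrates (a) a sine-based excitation signal generated from an F0 contour, as in the source module, and (b) a spectral-amplitude distance computed with standard STFT routines, as in the training criterion. This is a minimal PyTorch illustration, not the authors' released implementation; the function names, FFT/hop sizes, sine amplitude, and noise levels are illustrative assumptions.

```python
# Minimal sketch of two components described in the abstract, assuming
# PyTorch; this is NOT the authors' released code, and all names and
# hyper-parameters (FFT sizes, hop sizes, noise levels) are illustrative.
import math
import torch


def sine_excitation(f0, sample_rate=16000, noise_std=0.003):
    """Sine-based excitation from an F0 contour (one Hz value per sample).

    Voiced samples get a sine whose instantaneous phase is the cumulative
    sum of the normalized frequency; unvoiced samples (f0 == 0) fall back
    to Gaussian noise. Amplitudes here are assumptions, not paper values.
    """
    phase = 2.0 * math.pi * torch.cumsum(f0 / sample_rate, dim=-1)
    voiced = (f0 > 0).to(f0.dtype)
    sine = 0.1 * torch.sin(phase) + noise_std * torch.randn_like(f0)
    noise = 0.1 * torch.randn_like(f0)
    return voiced * sine + (1.0 - voiced) * noise


def spectral_amplitude_distance(generated, target,
                                fft_sizes=(512, 1024, 2048),
                                hop_sizes=(80, 160, 320)):
    """Log spectral-amplitude distance summed over several STFT configs."""
    loss = 0.0
    for n_fft, hop in zip(fft_sizes, hop_sizes):
        window = torch.hann_window(n_fft, device=generated.device)
        amp_g = torch.stft(generated, n_fft, hop_length=hop,
                           window=window, return_complex=True).abs()
        amp_t = torch.stft(target, n_fft, hop_length=hop,
                           window=window, return_complex=True).abs()
        # Small floor keeps log() finite for near-silent frames.
        loss = loss + torch.mean(
            (torch.log(amp_g + 1e-7) - torch.log(amp_t + 1e-7)) ** 2)
    return loss
```

In a hypothetical training loop, frame-level acoustic features would be upsampled to the waveform sampling rate, the excitation passed through the dilated-convolution filter module, and spectral_amplitude_distance minimized between generated and natural waveforms; because no sampling point depends autoregressively on previous outputs, a whole utterance can be generated in one pass.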
Pages: 402-415 (14 pages)