A Neural Vocoder With Hierarchical Generation of Amplitude and Phase Spectra for Statistical Parametric Speech Synthesis

Cited by: 24
Authors
Ai, Yang [1]
Ling, Zhen-Hua [1]
Affiliations
[1] University of Science and Technology of China, Department of Electronic Engineering and Information Science, Hefei, People's Republic of China
Funding
National Natural Science Foundation of China
Keywords
Vocoders; Training; Hidden Markov models; Acoustics; Signal generators; Speech synthesis; Neural networks; Vocoder; neural network; amplitude spectrum; phase spectrum; statistical parametric speech synthesis
DOI
10.1109/TASLP.2020.2970241
Chinese Library Classification
O42 [Acoustics]
Subject Classification Codes
070206; 082403
Abstract
This article presents a neural vocoder named HiNet, which reconstructs speech waveforms from acoustic features by predicting amplitude and phase spectra hierarchically. Unlike existing neural vocoders such as WaveNet, SampleRNN and WaveRNN, which generate waveform samples directly with a single neural network, the HiNet vocoder is composed of an amplitude spectrum predictor (ASP) and a phase spectrum predictor (PSP). The ASP is a simple DNN that predicts log amplitude spectra (LAS) from acoustic features. The predicted LAS are passed to the PSP for phase recovery. Considering the issue of phase wrapping and the difficulty of phase modeling, the PSP is constructed by concatenating a neural source-filter (NSF) waveform generator with a phase extractor. We also introduce generative adversarial networks (GANs) into both the ASP and the PSP. Finally, the outputs of the ASP and the PSP are combined to reconstruct speech waveforms by short-time Fourier synthesis. Since neither predictor contains autoregressive structures, the HiNet vocoder generates speech waveforms with high efficiency. Objective and subjective experimental results show that the proposed HiNet vocoder achieves better naturalness of reconstructed speech than the conventional STRAIGHT vocoder, a 16-bit WaveNet vocoder built on an open-source implementation, and an NSF vocoder of complexity similar to that of the PSP, and that it achieves performance comparable to a 16-bit WaveRNN vocoder. We also find that the performance of HiNet is, to some extent, insensitive to the complexity of the neural waveform generator in the PSP. After simplifying its model structure, the time needed to generate 1 s of 16 kHz speech on a GPU can be further reduced from 0.34 s to 0.19 s without significant quality degradation.
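As a minimal sketch of the final reconstruction step described in the abstract, the snippet below combines log amplitude spectra (as would be produced by the ASP) with phase spectra (as would be extracted by the PSP) and resynthesizes a waveform by inverse short-time Fourier transform. The sampling rate matches the 16 kHz setting mentioned above, but the FFT size, frame shift, array shapes and function names are assumptions for illustration; the abstract does not specify the paper's exact analysis/synthesis configuration.

import numpy as np
from scipy.signal import istft

# Assumed STFT settings for illustration only; the abstract does not give
# the exact frame length, frame shift or FFT size used in the paper.
SAMPLE_RATE = 16000      # 16 kHz speech, as in the experiments above
N_FFT = 512              # assumed FFT size (257 one-sided frequency bins)
FRAME_SHIFT = 80         # assumed 5 ms frame shift at 16 kHz

def synthesize(log_amp, phase):
    """Combine predicted amplitude and phase spectra into a waveform.

    log_amp : ndarray of shape (n_bins, n_frames), natural-log amplitude spectra
    phase   : ndarray of shape (n_bins, n_frames), phase spectra in radians
    """
    # Rebuild the complex spectrogram from amplitude and phase.
    spectrogram = np.exp(log_amp) * np.exp(1j * phase)
    # Overlap-add synthesis (short-time Fourier synthesis) with the assumed settings.
    _, waveform = istft(spectrogram,
                        fs=SAMPLE_RATE,
                        nperseg=N_FFT,
                        noverlap=N_FFT - FRAME_SHIFT,
                        input_onesided=True)
    return waveform

Because this combination step is deterministic, generation speed is governed by the ASP and PSP forward passes, which is consistent with the abstract's point that removing autoregressive structures from both predictors makes waveform generation efficient.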
Pages: 839-851
Number of pages: 13