Singing Voice Synthesis Using Deep Autoregressive Neural Networks for Acoustic Modeling

Cited by: 16
Authors
Yi, Yuan-Hao [1 ]
Ai, Yang [1 ]
Ling, Zhen-Hua [1 ]
Dai, Li-Rong [1 ]
Affiliations
[1] University of Science and Technology of China, National Engineering Laboratory for Speech and Language Information Processing, Hefei, People's Republic of China
Source
INTERSPEECH 2019 | 2019
Funding
National Key Research and Development Program of China;
Keywords
singing voice synthesis; deep autoregressive model; self-attention; recurrent neural network;
DOI
10.21437/Interspeech.2019-1563
Chinese Library Classification (CLC)
R36 [Pathology]; R76 [Otorhinolaryngology];
Subject Classification Codes
100104; 100213;
Abstract
This paper presents a method of using autoregressive neural networks for the acoustic modeling of singing voice synthesis (SVS). Singing voice differs from speech in that it contains more local dynamic movements of acoustic features, e.g., vibratos. Therefore, our method adopts deep autoregressive (DAR) models to predict the F0 and spectral features of singing voice, in order to better describe the dependencies among the acoustic features of consecutive frames. For F0 modeling, discretized F0 values are used, and the influence of the history length in DAR is analyzed experimentally. An F0 post-processing strategy is also designed to alleviate the inconsistency between the predicted F0 contours and the F0 values determined by the music notes. Furthermore, we extend the DAR model to handle continuous spectral features, introducing a prenet module with self-attention layers to process historical frames. Experiments on a Chinese singing voice corpus demonstrate that our method using DARs can effectively produce F0 contours with vibratos, and achieves better objective and subjective performance than a conventional method using recurrent neural networks (RNNs).
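To make the F0 modeling idea above concrete, here is a minimal PyTorch sketch (not the authors' code) of a deep autoregressive F0 model: each frame's discretized F0 class is predicted from frame-level musical-score features together with the discretized F0 values of the preceding frames. The class name DARF0Sketch and all sizes (num_f0_bins, history_len, the embedding and GRU dimensions) are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn as nn

class DARF0Sketch(nn.Module):
    # Deep autoregressive F0 sketch (assumed architecture): predicts a
    # distribution over discretized F0 bins for each frame, conditioned on
    # score features and a fixed-length window of previous F0 bins.
    def __init__(self, num_f0_bins=256, score_dim=64, history_len=10,
                 embed_dim=32, hidden_dim=256):
        super().__init__()
        self.f0_embed = nn.Embedding(num_f0_bins, embed_dim)
        self.rnn = nn.GRU(score_dim + history_len * embed_dim,
                          hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, num_f0_bins)

    def forward(self, score_feats, f0_history):
        # score_feats: (batch, frames, score_dim) frame-level score features
        # f0_history:  (batch, frames, history_len) discretized F0 of the
        #              history_len frames preceding each target frame
        emb = self.f0_embed(f0_history)    # (batch, frames, history_len, embed_dim)
        emb = emb.flatten(start_dim=2)     # concatenate the history window
        out, _ = self.rnn(torch.cat([score_feats, emb], dim=-1))
        return self.proj(out)              # logits over F0 bins per frame

model = DARF0Sketch()
score = torch.randn(2, 100, 64)                # 2 utterances, 100 frames
history = torch.randint(0, 256, (2, 100, 10))  # teacher-forced F0 history
logits = model(score, history)                 # shape: (2, 100, 256)

At synthesis time the model would instead run frame by frame, feeding the predicted (argmax or sampled) F0 bin back into the history window; the paper's post-processing step that reconciles the predicted contour with the note-determined F0, and the self-attention prenet used for spectral features, are both omitted here.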
Pages: 2593-2597
Page count: 5