Reverberation Modeling for Source-Filter-based Neural Vocoder

Cited by: 2
Authors
Ai, Yang [1 ]
Wang, Xin [2 ]
Yamagishi, Junichi [2 ,3 ]
Ling, Zhen-Hua [1 ]
Affiliations
[1] Univ Sci & Technol China, NELSLIP, Hefei, Peoples R China
[2] Natl Inst Informat, Tokyo, Japan
[3] Univ Edinburgh, CSTR, Edinburgh, Midlothian, Scotland
Source
INTERSPEECH 2020 | 2020
Funding
National Natural Science Foundation of China;
Keywords
reverberation; room impulse response; source-filter-based model; neural vocoder; SPEECH; GENERATION;
DOI
10.21437/Interspeech.2020-1613
CLC classification
R36 [Pathology]; R76 [Otorhinolaryngology];
Discipline codes
100104; 100213;
Abstract
This paper presents a reverberation module for source-filter-based neural vocoders that improves the performance of reverberant effect modeling. This module uses the output waveform of neural vocoders as an input and produces a reverberant waveform by convolving the input with a room impulse response (RIR). We propose two approaches to parameterizing and estimating the RIR. The first approach assumes a global time-invariant (GTI) RIR and directly learns the values of the RIR on a training dataset. The second approach assumes an utterance-level time-variant (UTV) RIR, which is invariant within one utterance but varies across utterances, and uses another neural network to predict the RIR values. We add the proposed reverberation module to the phase spectrum predictor (PSP) of a HiNet vocoder and jointly train the model. Experimental results demonstrate that the proposed module was helpful for modeling the reverberation effect and improving the perceived quality of generated reverberant speech. The UTV-RIR was shown to be more robust than the GTI-RIR to unknown reverberation conditions and achieved a perceptually better reverberation effect.
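The core operation the abstract describes is convolving the vocoder's dry output waveform with a room impulse response. A minimal sketch of that step, using an FFT-based convolution in NumPy; in the paper the RIR values are learned jointly with the vocoder (GTI-RIR) or predicted per utterance by a network (UTV-RIR), whereas here `rir` is simply a given array, and the function name is illustrative:

```python
import numpy as np

def reverberate(dry: np.ndarray, rir: np.ndarray) -> np.ndarray:
    """Produce a reverberant waveform by convolving `dry` with `rir`.

    Sketch of the reverberation module's forward pass: linear convolution
    of the dry waveform with a room impulse response, computed via the FFT
    for efficiency. The learned/predicted RIR of the paper is replaced by
    a plain array for illustration.
    """
    n = len(dry) + len(rir) - 1                # length of the full linear convolution
    nfft = 1 << (n - 1).bit_length()           # next power of two for the FFT
    wet = np.fft.irfft(np.fft.rfft(dry, nfft) * np.fft.rfft(rir, nfft), nfft)
    return wet[:n]

# Toy check: a unit impulse through a two-tap "RIR" (direct path plus one
# echo at 2 samples, half amplitude) returns the RIR itself.
dry = np.zeros(8)
dry[0] = 1.0
rir = np.array([1.0, 0.0, 0.5])
wet = reverberate(dry, rir)
```

In a trainable setting the same convolution would be expressed with a differentiable FFT (e.g. in an autodiff framework) so that gradients flow from the reverberant waveform back into the RIR parameters, which is what joint training with the HiNet phase spectrum predictor requires.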
Pages: 3560-3564
Page count: 5