Exemplar-based voice conversion using joint nonnegative matrix factorization

Authors
Zhizheng Wu
Eng Siong Chng
Haizhou Li
Affiliations
[1] Nanyang Technological University,School of Computer Engineering
[2] University of Edinburgh,Centre for Speech Technology Research
[3] Nanyang Technological University,School of Computer Engineering
[4] Nanyang Technological University,Human Language Technology Department, Institute for Infocomm Research, School of Computer Engineering
Source
Multimedia Tools and Applications | 2015 / Vol. 74
Keywords
Speech synthesis; Voice conversion; Exemplar; Sparse representation; Nonnegative matrix factorization; Joint nonnegative matrix factorization;
DOI: not available
Abstract
Exemplar-based sparse representation is a nonparametric framework for voice conversion. In this framework, a target spectrum is generated as a weighted linear combination of a set of basis spectra, called exemplars, extracted from the training data. The framework adopts coupled source-target dictionaries consisting of acoustically aligned source-target exemplars and assumes the two dictionaries share the same activation matrix. At runtime, a source spectrogram is factorized as the product of the source dictionary and this common activation matrix, which is then applied to the target dictionary to generate the target spectrogram. In practice, the source dictionary uses either low-resolution mel-scale filter-bank energies or high-resolution spectra. Low-resolution features capture temporal context flexibly without significantly increasing computational cost or memory usage, while high-resolution spectra preserve fine spectral detail. In this paper, we propose a joint nonnegative matrix factorization technique that estimates the common activation matrix from low- and high-resolution features simultaneously, so that the activation matrix benefits from both feature types directly. We conducted experiments on the VOICES database to evaluate the performance of the proposed method; both objective and subjective evaluations confirmed its effectiveness.
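The conversion step described in the abstract can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: it uses the standard multiplicative update for Euclidean-distance NMF with the dictionaries held fixed, and stacks low- and high-resolution source features so a single activation matrix must explain both views, in the spirit of the joint factorization. All names, dimensions, and the synthetic data are illustrative assumptions.

```python
import numpy as np

def estimate_activations(V, W, n_iter=500, eps=1e-12):
    # Multiplicative updates for the Euclidean NMF objective ||V - WH||^2,
    # with the exemplar dictionary W held fixed; only H is updated.
    rng = np.random.default_rng(0)
    H = rng.random((W.shape[1], V.shape[1]))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
    return H

rng = np.random.default_rng(1)
n_exemplars, n_frames = 20, 30

# Toy coupled dictionaries: column k of each matrix is the same acoustically
# aligned exemplar in a different representation (dimensions are illustrative).
A_lo = rng.random((8, n_exemplars))    # low-resolution source (filter-bank-like)
A_hi = rng.random((64, n_exemplars))   # high-resolution source (spectra-like)
B_hi = rng.random((64, n_exemplars))   # high-resolution target exemplars

# Joint formulation: stack low- and high-resolution source features so one
# common activation matrix must jointly explain both views of the source.
A_joint = np.vstack([A_lo, A_hi])
H_true = rng.random((n_exemplars, n_frames))
V_joint = A_joint @ H_true             # synthetic source observation

H = estimate_activations(V_joint, A_joint)
target = B_hi @ H                      # converted target spectrogram
```

With the dictionary fixed, the objective is convex in H, so the multiplicative updates drive the reconstruction `A_joint @ H` toward `V_joint`; applying the same activations to the target dictionary yields the converted spectrogram.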
Pages: 9943-9958
Page count: 15