Monaural speech segregation based on fusion of source-driven with model-driven techniques

Cited by: 19
Authors
Radfar, Mohammad H.
Dansereau, Richard M.
Sayadiyan, Abolghasem
Affiliations
[1] Carleton Univ, Dept Syst & Comp Engn, Ottawa, ON K1S 5B6, Canada
[2] Amirkabir Univ Technol, Dept Elect Engn, Tehran 15875 4413, Iran
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
speech processing; monaural speech segregation; CASA; speech coding; harmonic modelling; vector quantization; envelope extraction; multi-pitch tracking; MIXMAX estimator;
DOI
10.1016/j.specom.2007.04.007
CLC number
O42 [Acoustics];
Subject classification code
070206; 082403;
Abstract
In this paper, a new single-channel speech segregation technique is presented that exploits prevalent methods in speech coding and synthesis. The technique integrates a model-driven method with a source-driven method to take advantage of both individual approaches and significantly reduce their pitfalls. We apply harmonic modelling, in which the pitch and spectrum envelope are the main components of the analysis and synthesis stages. Pitch values of the two speakers are obtained using a source-driven method. The spectrum envelope is obtained using a new model-driven technique consisting of four components: a trained codebook of vector-quantized envelopes (VQ-based separation), a mixture-maximum (MIXMAX) approximation, a minimum mean square error (MMSE) estimator, and a harmonic synthesizer. In contrast with previous model-driven techniques, this approach is speaker-independent and can separate out unvoiced regions as well as suppress the crosstalk effect, both of which are drawbacks of source-driven, or equivalently computational auditory scene analysis (CASA), models. We compare our fused model with both model- and source-driven techniques through subjective and objective experiments. The results show that although model-based separation delivers the best quality in the speaker-dependent case, the integrated model outperforms the individual approaches in the speaker-independent scenario. This result supports the idea that the human auditory system relies on both grouping cues (e.g., pitch tracking) and a priori knowledge (e.g., trained quantized envelopes) to segregate speech signals. (C) 2007 Elsevier B.V. All rights reserved.
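As a rough illustration of the model-driven stage described above, the following is a minimal NumPy sketch, not the authors' implementation: all function and variable names are hypothetical, and a uniform codeword prior with fixed Gaussian observation noise is assumed for simplicity. Under MIXMAX, the log-spectral envelope of the mixture is approximated by the element-wise maximum of the two source envelopes, and the MMSE estimate of each speaker's envelope is the posterior-weighted average over codeword pairs.

    import numpy as np

    def mixmax_mmse_envelopes(y, codebook_a, codebook_b, sigma=1.0):
        # y: log-spectral envelope of one mixture frame, shape (D,)
        # codebook_a, codebook_b: trained VQ codebooks of log envelopes,
        # shapes (Ka, D) and (Kb, D); names are illustrative only.
        Ka, Kb = len(codebook_a), len(codebook_b)
        log_lik = np.empty((Ka, Kb))
        for i in range(Ka):
            for j in range(Kb):
                # MIXMAX: the mixture envelope is approximated by the
                # element-wise max of the two codeword envelopes.
                mix = np.maximum(codebook_a[i], codebook_b[j])
                log_lik[i, j] = -0.5 * np.sum((y - mix) ** 2) / sigma ** 2
        # Posterior over codeword pairs (uniform prior assumed).
        post = np.exp(log_lik - log_lik.max())
        post /= post.sum()
        # MMSE estimates: posterior-weighted averages of the codewords.
        env_a = post.sum(axis=1) @ codebook_a
        env_b = post.sum(axis=0) @ codebook_b
        return env_a, env_b

In the paper's pipeline, envelope estimates of this kind, together with the multi-pitch tracks from the source-driven stage, would drive the harmonic synthesizer that reconstructs each speaker.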
Pages: 464-476
Page count: 13