Monaural speech segregation based on pitch tracking and amplitude modulation

Cited by: 283
Authors
Hu, GN [1 ]
Wang, DL
Affiliations
[1] Ohio State Univ, Biophys Program, Columbus, OH 43210 USA
[2] Ohio State Univ, Dept Comp & Informat Sci, Columbus, OH 43210 USA
[3] Ohio State Univ, Ctr Cognit Sci, Columbus, OH 43210 USA
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 2004, Vol. 15, No. 5
Funding
National Science Foundation (USA);
Keywords
amplitude modulation (AM); computational auditory scene analysis; grouping; monaural speech segregation; pitch tracking; segmentation;
DOI
10.1109/TNN.2004.832812
Chinese Library Classification
TP18 [theory of artificial intelligence];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Segregating speech from one monaural recording has proven to be very challenging. Monaural segregation of voiced speech has been studied in previous systems that incorporate auditory scene analysis principles. A major problem for these systems is their inability to deal with the high-frequency part of speech. Psychoacoustic evidence suggests that different perceptual mechanisms are involved in handling resolved and unresolved harmonics. We propose a novel system for voiced speech segregation that segregates resolved and unresolved harmonics differently. For resolved harmonics, the system generates segments based on temporal continuity and cross-channel correlation, and groups them according to their periodicities. For unresolved harmonics, it generates segments based on common amplitude modulation (AM) in addition to temporal continuity and groups them according to AM rates. Underlying the segregation process is a pitch contour that is first estimated from speech segregated according to dominant pitch and then adjusted according to psychoacoustic constraints. Our system is systematically evaluated and compared with previous systems, and it yields substantially better performance, especially for the high-frequency part of speech.
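The cross-channel correlation cue mentioned in the abstract can be illustrated with a minimal sketch: adjacent filterbank channels driven by the same harmonic produce highly correlated autocorrelation responses, while channels dominated by different sources do not. This is not the paper's implementation; the function name, the assumption of a precomputed per-channel autocorrelation array, and the synthetic example are all illustrative.

```python
import numpy as np

def cross_channel_correlation(acf):
    """Correlation between autocorrelations of adjacent channels.

    acf: array of shape (n_channels, n_lags), the per-channel
    autocorrelation of the filterbank response in one time frame
    (assumed precomputed; the paper derives its periodicity
    representation from a gammatone front end).
    Returns an array of shape (n_channels - 1,) with values in
    [-1, 1]; high values suggest neighboring channels respond to
    the same periodic source and may be merged into one segment.
    """
    # Zero-mean and unit-normalize each channel's autocorrelation.
    a = acf - acf.mean(axis=1, keepdims=True)
    norms = np.linalg.norm(a, axis=1, keepdims=True)
    a = a / np.where(norms == 0, 1.0, norms)
    # Dot product of each pair of neighboring normalized ACFs.
    return np.sum(a[:-1] * a[1:], axis=1)

# Synthetic example: two channels sharing one periodicity agree,
# a noise-dominated channel does not.
lags = np.arange(200)
acf = np.stack([
    np.cos(2 * np.pi * lags / 50),               # channel near period 50
    np.cos(2 * np.pi * lags / 50),               # neighbor, same period
    np.random.default_rng(0).normal(size=200),   # unrelated channel
])
corr = cross_channel_correlation(acf)
print(np.round(corr, 2))  # first pair ~1.0, second pair near 0
```

A segmentation stage would then threshold these correlations (together with temporal continuity) to decide which neighboring time-frequency units belong to the same segment.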
Pages: 1135-1150 (16 pages)