Multi-target ensemble learning based speech enhancement with temporal-spectral structured target

Cited by: 2
Authors
Wang, Wenbo [1 ]
Guo, Weiwei [2 ,3 ,4 ]
Liu, Houguang [1 ]
Yang, Jianhua [1 ]
Liu, Songyong [1 ]
Affiliations
[1] China Univ Min & Technol, Sch Mechatron Engn, Xuzhou 221116, Peoples R China
[2] Chinese Peoples Liberat Army Gen Hosp, Coll Otolaryngol Head & Neck Surg, Beijing 100853, Peoples R China
[3] Natl Clin Res Ctr Otolaryngol Dis, Beijing 100853, Peoples R China
[4] Minist Educ, Key Lab Hearing Sci, Beijing 100853, Peoples R China
Keywords
Speech enhancement; Temporal-spectral structured target; Multi-target ensemble learning; Sparse nonnegative matrix factorization; RECURRENT NEURAL-NETWORKS; TRAINING TARGETS; NOISE; SEPARATION; FEATURES; QUALITY; BINARY; INTELLIGIBILITY; RECOGNITION; ALGORITHM;
DOI
10.1016/j.apacoust.2023.109268
Chinese Library Classification (CLC)
O42 [Acoustics];
Subject classification codes
070206 ; 082403 ;
Abstract
Recently, deep neural network (DNN)-based speech enhancement has shown considerable success, and mapping-based and masking-based approaches are the two most commonly used methods. However, these methods do not consider the spectral structures of the signal. In this paper, a novel structured multi-target ensemble learning (SMTEL) framework is proposed, which uses target temporal-spectral structures to improve speech quality and intelligibility. First, the basis matrices of clean speech, noise, and the ideal ratio mask (IRM) are captured by sparse nonnegative matrix factorization; these matrices contain the basic structures of the signal. Second, the basis matrices are co-trained with a multi-target DNN to estimate the activation matrices instead of directly estimating the targets. Then a jointly trained single-layer perceptron is proposed to integrate the two targets and further improve speech quality and intelligibility. The sequential floating forward selection method is used to systematically analyze the impact of the integrated targets on enhancement performance, as well as the effect of the target weights on the results. Finally, the proposed method is combined with progressive learning to further improve enhancement performance. Systematic experiments on the UW/NU corpus show that the proposed method achieves the best enhancement effect at low network cost and complexity, especially in nonstationary noise environments. Compared with a target integration method that does not use structured targets and with the long short-term memory masking method, the speech quality of the proposed method is improved by 25.6% and 29.2% respectively under restaurant noise, and the speech intelligibility is improved by 35.5% and 15.8%, respectively. (c) 2023 Elsevier Ltd. All rights reserved.
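The abstract's first step — learning a fixed basis matrix via sparse NMF, then estimating only the activation matrix for a new signal (the quantity the multi-target DNN predicts in the paper) — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the multiplicative update rules, the Euclidean cost with an L1 sparsity penalty on the activations, and all function names and parameter values are assumptions for the sketch.

```python
import numpy as np

def sparse_nmf(V, rank, sparsity=0.1, n_iter=300, seed=0):
    """Factorize a nonnegative matrix V ≈ W @ H (e.g. a magnitude
    spectrogram) with an L1 sparsity penalty on the activations H."""
    rng = np.random.default_rng(seed)
    n_freq, n_frames = V.shape
    W = rng.random((n_freq, rank)) + 1e-9   # basis (spectral structures)
    H = rng.random((rank, n_frames)) + 1e-9  # activations
    for _ in range(n_iter):
        # Multiplicative updates for the Euclidean cost; the extra
        # `sparsity` term in H's denominator implements the L1 penalty.
        H *= (W.T @ V) / (W.T @ W @ H + sparsity + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
        # Normalize basis columns so sparsity acts on H, not on scale.
        W /= np.maximum(W.sum(axis=0, keepdims=True), 1e-9)
    return W, H

def estimate_activations(V, W, sparsity=0.1, n_iter=300, seed=0):
    """With the basis W held fixed (learned offline from clean speech,
    noise, or IRM training data), estimate only the activations for a
    new signal — the role played by the DNN's output in the paper."""
    rng = np.random.default_rng(seed)
    H = rng.random((W.shape[1], V.shape[1])) + 1e-9
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + sparsity + 1e-9)
    return H
```

Estimating activations against a structured, pre-learned basis (rather than regressing the spectrogram or mask directly) is what constrains the output to lie near the span of plausible temporal-spectral patterns.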
Pages: 13
Related references
59 total
  • [51] Speech enhancement based on noise classification and deep neural network
    Wang, Wenbo
    Liu, Houguang
    Yang, Jianhua
    Cao, Guohua
    Hua, Chunli
    [J]. MODERN PHYSICS LETTERS B, 2019, 33 (17)
  • [52] On Training Targets for Supervised Speech Separation
    Wang, Yuxuan
    Narayanan, Arun
    Wang, DeLiang
    [J]. IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2014, 22 (12) : 1849 - 1858
  • [53] Exploring Monaural Features for Classification-Based Speech Segregation
    Wang, Yuxuan
    Han, Kun
    Wang, DeLiang
    [J]. IEEE TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2013, 21 (02): : 270 - 279
  • [54] LSTM-convolutional-BLSTM encoder-decoder network for minimum mean-square error approach to speech enhancement
    Wang, Zeyu
    Zhang, Tao
    Shao, Yangyang
    Ding, Biyun
    [J]. APPLIED ACOUSTICS, 2021, 172
  • [55] Reconstruction techniques for improving the perceptual quality of binary masked speech
    Williamson, Donald S.
    Wang, Yuxuan
    Wang, DeLiang
    [J]. JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA, 2014, 136 (02) : 892 - 902
  • [56] A Regression Approach to Speech Enhancement Based on Deep Neural Networks
    Xu, Yong
    Du, Jun
    Dai, Li-Rong
    Lee, Chin-Hui
    [J]. IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2015, 23 (01) : 7 - 19
  • [57] Multi-target Ensemble Learning for Monaural Speech Separation
    Zhang, Hui
    Zhang, Xueliang
    Gao, Guanglai
    [J]. 18TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2017), VOLS 1-6: SITUATED INTERACTION, 2017, : 1958 - 1962
  • [58] A Deep Ensemble Learning Method for Monaural Speech Separation
    Zhang, Xiao-Lei
    Wang, DeLiang
    [J]. IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2016, 24 (05) : 967 - 977
  • [59] Phase-Aware Speech Enhancement Based on Deep Neural Networks
    Zheng, Naijun
    Zhang, Xiao-Lei
    [J]. IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2019, 27 (01) : 63 - 76