Multi-resolution auditory cepstral coefficient and adaptive mask for speech enhancement with deep neural network

Cited by: 8
Authors
Li, Ruwei [1 ]
Sun, Xiaoyue [1 ]
Liu, Yanan [1 ]
Yang, Dengcai [1 ]
Dong, Liang [2 ]
Affiliations
[1] Beijing Univ Technol, Sch Informat & Commun Engn, Fac Informat Technol, Beijing Key Lab Computat Intelligence & Intellige, Beijing, Peoples R China
[2] Baylor Univ, Elect & Comp Engn, Waco, TX 76798 USA
Funding
National Natural Science Foundation of China;
Keywords
Speech enhancement; Deep neural network; Multi-resolution auditory cepstral coefficient; Adaptive mask; NOISE;
DOI
10.1186/s13634-019-0618-4
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic and Communication Technology];
Subject Classification Codes
0808; 0809;
Abstract
Existing speech enhancement algorithms perform poorly in low signal-to-noise ratio (SNR), non-stationary noise environments. To address this problem, a novel speech enhancement algorithm based on a multi-resolution feature and an adaptive mask with deep learning is presented in this paper. First, we construct a new feature called the multi-resolution auditory cepstral coefficient (MRACC). This feature, extracted from four cochleagrams of different resolutions, captures both local information and spectrotemporal context while reducing algorithmic complexity. Second, an adaptive mask (AM) that can track noise changes is proposed for speech enhancement. The AM flexibly combines the advantages of the ideal binary mask (IBM) and the ideal ratio mask (IRM) as the SNR changes. Third, a deep neural network (DNN) is used as a nonlinear function to estimate the adaptive mask, with the MRACC features and their first and second derivatives as its input. Finally, the estimated AM is used to weight the noisy speech to obtain the enhanced speech. Experimental results show that the proposed algorithm not only further improves speech quality and intelligibility but also suppresses more noise than the comparison algorithms, while having lower computational complexity.
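The abstract describes the adaptive mask as a per-time-frequency-unit combination of the IBM and IRM that follows the local SNR. The Python sketch below is only an illustration of that idea under stated assumptions, not the authors' implementation: the function name adaptive_mask, the sigmoid blending rule, and the local-criterion and steepness parameters are hypothetical choices made for clarity.

```python
# Minimal illustrative sketch (not the paper's exact method): blend an ideal
# binary mask (IBM) and an ideal ratio mask (IRM) per time-frequency unit as
# a function of the local SNR, then weight the noisy spectrogram with the
# result. The sigmoid blending rule and its parameters are assumptions.
import numpy as np

def adaptive_mask(speech_power, noise_power, lc_db=0.0, steepness=0.5):
    """Blend IBM and IRM per T-F unit according to local SNR (in dB)."""
    eps = 1e-12
    local_snr_db = 10.0 * np.log10((speech_power + eps) / (noise_power + eps))

    # Ideal binary mask: 1 where local SNR exceeds the local criterion (LC).
    ibm = (local_snr_db > lc_db).astype(float)

    # Ideal ratio mask: soft speech-presence weight in [0, 1].
    irm = speech_power / (speech_power + noise_power + eps)

    # Blending weight: lean on the IBM at high SNR and the IRM at low SNR.
    # (A sigmoid of local SNR is one simple choice; the paper's rule may differ.)
    alpha = 1.0 / (1.0 + np.exp(-steepness * (local_snr_db - lc_db)))
    return alpha * ibm + (1.0 - alpha) * irm

# Toy usage: apply the mask to a noisy magnitude spectrogram.
rng = np.random.default_rng(0)
speech_pow = rng.random((64, 100))   # |S(t,f)|^2, e.g. 64 bands x 100 frames
noise_pow = rng.random((64, 100))    # |N(t,f)|^2
noisy_mag = np.sqrt(speech_pow + noise_pow)

am = adaptive_mask(speech_pow, noise_pow)
enhanced_mag = am * noisy_mag        # resynthesis would reuse the noisy phase
```

In the paper's pipeline a DNN predicts this mask from the MRACC features and their derivatives; the sketch only shows how such a mask would weight the noisy spectrogram once estimated.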
Pages: 16