Triple-0: Zero-shot denoising and dereverberation on an end-to-end frozen anechoic speech separation network

Cited by: 0
Authors
Gul, Sania [1 ,2 ]
Khan, Muhammad Salman [3 ]
Ur-Rehman, Ata [4 ,5 ]
Affiliations
[1] Univ Engn & Technol, Dept Elect Engn, Peshawar, Pakistan
[2] Univ Engn & Technol, Intelligent Informat Proc Lab, Natl Ctr Artificial Intelligence, Peshawar, Pakistan
[3] Qatar Univ, Coll Engn, Dept Elect Engn, Doha, Qatar
[4] NUST, Dept Elect Engn, MCS, Islamabad, Pakistan
[5] Ravensbourne Univ London, Dept Business & Comp, London, England
Keywords
TIME-FREQUENCY MASKING; ROOM ACOUSTICS; NEURAL-NETWORK; INTELLIGIBILITY; DOMAIN; QUALITY; NOISE;
DOI
10.1371/journal.pone.0301692
CLC Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline Classification Codes
07 ; 0710 ; 09 ;
Abstract
Speech enhancement is crucial for both human and machine listening applications. Over the last decade, the use of deep learning for speech enhancement has brought tremendous improvements over classical signal processing and machine learning methods. However, training a deep neural network is not only time-consuming; it also requires extensive computational resources and a large training dataset. Transfer learning, i.e. using a pretrained network for a new task, comes to the rescue by reducing the required training time, computational resources, and dataset size, but the network still needs to be fine-tuned for the new task. This paper presents a novel method of speech denoising and dereverberation (SD&D) on an end-to-end frozen binaural anechoic speech separation network. The frozen network requires neither any architectural change nor any fine-tuning for the new task, as is usually required in transfer learning. The interaural cues of a source placed in noisy and echoic surroundings are given as input to this pretrained network to extract the target speech from noise and reverberation. Although the pretrained model used in this paper never saw noisy reverberant conditions during training, it performs satisfactorily under zero-shot testing (ZST) in these conditions. This is because the model was trained on the direct-path interaural cues of an active source and can therefore recognize them even in the presence of echoes and noise. ZST on the same dataset on which the pretrained network was trained (homo-corpus), for an unseen class of interference, shows considerable improvement over the weighted prediction error (WPE) algorithm in terms of four objective speech quality and intelligibility metrics. The proposed model also offers performance similar to that of a deep learning SD&D algorithm on this dataset under varying noise and reverberation conditions. Similarly, ZST on a different dataset yields improved intelligibility and almost the same quality as the WPE algorithm.
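As the abstract describes, the method derives binaural interaural cues from the noisy reverberant mixture and feeds them to the frozen anechoic separator, which estimates the target source without any weight update or fine-tuning. Below is a minimal sketch of that pipeline, assuming an ILD/IPD cue layout, 16 kHz audio, a time-frequency masking output, and a hypothetical `frozen_model.predict` interface; the paper's actual network inputs and output stage may differ.

```python
# Minimal sketch (not the authors' code): zero-shot enhancement by feeding
# interaural cues of a binaural mixture to a frozen anechoic separator.
import numpy as np
from scipy.signal import stft, istft

FS = 16000    # sampling rate in Hz (assumed)
N_FFT = 512   # STFT window length (assumed)
HOP = 128     # STFT hop length (assumed)

def interaural_cues(left, right):
    """Return left/right STFTs plus ILD (dB) and IPD (rad) per T-F bin."""
    _, _, L = stft(left, fs=FS, nperseg=N_FFT, noverlap=N_FFT - HOP)
    _, _, R = stft(right, fs=FS, nperseg=N_FFT, noverlap=N_FFT - HOP)
    eps = 1e-8
    ild = 20.0 * np.log10((np.abs(L) + eps) / (np.abs(R) + eps))
    ipd = np.angle(L * np.conj(R))
    return L, R, ild, ipd

def zero_shot_enhance(left, right, frozen_model):
    """Zero-shot SD&D: the frozen separator maps direct-path interaural
    cues to a T-F mask; no weights are updated, no fine-tuning occurs."""
    L, _, ild, ipd = interaural_cues(left, right)
    features = np.stack([ild, ipd], axis=0)   # (2, F, T) cue tensor
    mask = frozen_model.predict(features)     # hypothetical frozen-model API
    _, target = istft(mask * L, fs=FS, nperseg=N_FFT, noverlap=N_FFT - HOP)
    return target
```

The intuition carried by this sketch matches the abstract's argument: in time-frequency bins dominated by the direct path, the ILD/IPD values stay close to their anechoic counterparts, so a network trained only on anechoic cues can still recognize the target even when other bins are corrupted by echoes and noise.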
Pages: 19