Triple-0: Zero-shot denoising and dereverberation on an end-to-end frozen anechoic speech separation network

Cited by: 0
Authors
Gul, Sania [1 ,2 ]
Khan, Muhammad Salman [3 ]
Ur-Rehman, Ata [4 ,5 ]
Affiliations
[1] Univ Engn & Technol, Dept Elect Engn, Peshawar, Pakistan
[2] Univ Engn & Technol, Intelligent Informat Proc Lab, Natl Ctr Artificial Intelligence, Peshawar, Pakistan
[3] Qatar Univ, Coll Engn, Dept Elect Engn, Doha, Qatar
[4] NUST, Dept Elect Engn, MCS, Islamabad, Pakistan
[5] Ravensbourne Univ London, Dept Business & Comp, London, England
Keywords
TIME-FREQUENCY MASKING; ROOM ACOUSTICS; NEURAL-NETWORK; INTELLIGIBILITY; DOMAIN; QUALITY; NOISE
DOI
10.1371/journal.pone.0301692
CLC classification numbers
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Subject classification codes
07; 0710; 09
Abstract
Speech enhancement is crucial for both human and machine listening applications. Over the last decade, the use of deep learning for speech enhancement has brought tremendous improvement over classical signal processing and machine learning methods. However, training a deep neural network is not only time-consuming; it also requires extensive computational resources and a large training dataset. Transfer learning, i.e., using a pretrained network for a new task, comes to the rescue by reducing the required training time, computational resources, and dataset size, but the network still needs to be fine-tuned for the new task. This paper presents a novel method of speech denoising and dereverberation (SD&D) on an end-to-end frozen binaural anechoic speech separation network. The frozen network requires neither any architectural change nor any fine-tuning for the new task, as is usually required for transfer learning. The interaural cues of a source placed in noisy and echoic surroundings are given as input to this pretrained network to extract the target speech from noise and reverberation. Although the pretrained model used in this paper has never seen noisy reverberant conditions during its training, it performs satisfactorily under zero-shot testing (ZST) in these conditions. This is because the pretrained model has been trained on the direct-path interaural cues of an active source and can therefore recognize them even in the presence of echoes and noise. ZST on the same dataset on which the pretrained network was trained (homo-corpus), for an unseen class of interference, has shown considerable improvement over the weighted prediction error (WPE) algorithm in terms of four objective speech quality and intelligibility metrics. The proposed model also offers performance comparable to that of a deep learning SD&D algorithm on this dataset under varying conditions of noise and reverberation. Similarly, ZST on a different dataset has provided an improvement in intelligibility and almost equivalent quality to that of the WPE algorithm.
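The abstract describes feeding the interaural cues of a binaural recording to the frozen separation network. As a minimal sketch of what such cues look like, the snippet below computes the two most common ones, interaural level difference (ILD) and interaural phase difference (IPD), from a pair of complex STFTs; the function name and shapes are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def interaural_cues(left_stft, right_stft, eps=1e-8):
    """Compute interaural cues from binaural STFTs.

    left_stft, right_stft: complex arrays of shape (freq, time).
    Returns ILD in dB and IPD in radians, per time-frequency bin.
    """
    # ILD: log-magnitude ratio between the two ears (eps avoids log(0)).
    ild = 20.0 * np.log10((np.abs(left_stft) + eps) / (np.abs(right_stft) + eps))
    # IPD: phase of the cross-channel product, i.e. phase(L) - phase(R).
    ipd = np.angle(left_stft * np.conj(right_stft))
    return ild, ipd

# Toy example: a source 6 dB louder in the left channel with a 0.1 rad lead.
rng = np.random.default_rng(0)
base = rng.standard_normal((257, 100)) + 1j * rng.standard_normal((257, 100))
left = 2.0 * base * np.exp(1j * 0.1)
right = base
ild, ipd = interaural_cues(left, right)
print(round(float(ild.mean()), 2))  # ≈ 6.02 (20*log10(2) dB)
print(round(float(ipd.mean()), 2))  # ≈ 0.1 rad
```

In a noisy reverberant mixture these per-bin cues become scattered, but the direct-path bins still carry the source's characteristic ILD/IPD pattern, which is what allows a network trained only on anechoic cues to lock onto the target in zero-shot conditions.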
Pages: 19