NASTAR: Noise Adaptive Speech Enhancement with Target-Conditional Resampling

Cited by: 0
Authors
Lee, Chi-Chang [1 ,2 ]
Hu, Cheng-Hung [3 ]
Lin, Yu-Chen [1 ,2 ]
Chen, Chu-Song [1 ,2 ,3 ]
Wang, Hsin-Min [3 ]
Tsao, Yu [2 ]
Affiliations
[1] Natl Taiwan Univ, Dept Comp Sci & Informat Engn, Taipei, Taiwan
[2] Acad Sinica, Res Ctr Informat Technol Innovat, Taipei, Taiwan
[3] Acad Sinica, Inst Informat Sci, Taipei, Taiwan
Source
INTERSPEECH 2022 | 2022
Keywords
speech enhancement; noise adaptation; contrastive learning; source separation; acoustic retrieval;
DOI
10.21437/Interspeech.2022-527
Chinese Library Classification (CLC)
O42 [Acoustics];
Discipline classification codes
070206; 082403;
Abstract
For deep learning-based speech enhancement (SE) systems, an acoustic mismatch between training and test conditions can cause notable performance degradation. Numerous noise adaptation strategies have been derived to address this mismatch. In this paper, we propose a novel method, called noise adaptive speech enhancement with target-conditional resampling (NASTAR), which reduces the mismatch with only one sample (one-shot) of noisy speech from the target environment. NASTAR uses a feedback mechanism to simulate adaptive training data via a noise extractor and a retrieval model. The noise extractor estimates the target noise, termed the pseudo-noise, from the noisy speech. The noise retrieval model retrieves relevant noise samples, termed the relevant-cohort, from a pool of noise signals according to the noisy speech. The pseudo-noise and the relevant-cohort set are jointly sampled and mixed with a source speech corpus to prepare simulated training data for noise adaptation. Experimental results show that NASTAR can effectively use a single noisy speech sample to adapt an SE model to a target condition, and that both the noise extractor and the noise retrieval model contribute to model adaptation. To the best of our knowledge, NASTAR is the first work to perform one-shot noise adaptation through noise extraction and retrieval.
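The core data-simulation step the abstract describes — jointly sampling from the pseudo-noise and the relevant-cohort, then mixing with clean source speech — can be sketched in NumPy as below. This is a minimal illustration under stated assumptions, not the paper's implementation: the function names (`mix_at_snr`, `sample_adaptation_batch`), the sampling probability `p_pseudo`, and the SNR range are hypothetical choices introduced here for clarity.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so the mixture reaches the requested SNR, then add it to the speech."""
    # Tile the noise if it is shorter than the speech, then truncate to match.
    reps = int(np.ceil(len(speech) / len(noise)))
    noise = np.tile(noise, reps)[: len(speech)]
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    # Solve for the gain that yields the target signal-to-noise ratio.
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise

def sample_adaptation_batch(clean_corpus, pseudo_noise, cohort, rng,
                            p_pseudo=0.5, snr_range=(-5.0, 10.0)):
    """Draw one simulated (noisy, clean) training pair for noise adaptation.

    With probability p_pseudo the extracted pseudo-noise is used; otherwise a
    noise sample is drawn from the retrieved relevant-cohort. (Both the mixing
    probability and the SNR range are illustrative assumptions.)
    """
    clean = clean_corpus[rng.integers(len(clean_corpus))]
    if rng.random() < p_pseudo:
        noise = pseudo_noise
    else:
        noise = cohort[rng.integers(len(cohort))]
    snr_db = rng.uniform(*snr_range)
    return mix_at_snr(clean, noise, snr_db), clean
```

In a training loop, pairs drawn this way would replace (or augment) the original simulated training set, so the SE model sees mixtures whose noise statistics approximate the one-shot target recording.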
Pages: 1183-1187
Number of pages: 5