TFDense-GAN: a generative adversarial network for single-channel speech enhancement

Cited: 0
Authors
Chen, Haoxiang [1 ]
Zhang, Jinxiu [1 ]
Fu, Yaogang [1 ]
Zhou, Xintong [1 ]
Wang, Ruilong [1 ]
Xu, Yanyan [1 ]
Ke, Dengfeng [2 ]
Affiliations
[1] Beijing Forestry Univ, 35 Qinghua East Rd, Beijing 100083, Peoples R China
[2] Beijing Language & Culture Univ, 15 Xueyuan Rd, Beijing 100083, Peoples R China
Keywords
Speech enhancement; Time-frequency domain; Generative adversarial network; Improved DenseBlock; Time-frequency transformer
DOI
10.1186/s13634-025-01210-1
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline codes
0808; 0809;
Abstract
Research indicates that exploiting the spectrum in the time-frequency domain plays a crucial role in speech enhancement, as it extracts audio features more effectively while reducing computational cost. Among time-frequency-domain speech enhancement methods, the introduction of attention mechanisms and the application of DenseBlock have yielded promising results. In particular, the Unet architecture, which comprises three main components (an encoder, a decoder, and a bottleneck), employs DenseBlock in both the encoder and the decoder to achieve powerful feature fusion with fewer parameters. In this paper, to build on the strengths of these methods, we propose a Unet-based time-frequency-domain denoising model called TFDense-Net. It uses our improved DenseBlock for feature extraction in both the encoder and the decoder, and applies an attention mechanism in the bottleneck for feature fusion and denoising. The model performs strongly on speech enhancement tasks, achieving significant improvements in Si-SDR over other state-of-the-art models. Additionally, to further improve denoising performance and enlarge the model's receptive field, we introduce a multi-spectrogram discriminator based on multiple STFTs. Because the discriminator loss captures correlations between spectra that traditional loss functions cannot detect, we train TFDense-Net as a generator against the multi-spectrogram discriminator, yielding a significant improvement in denoising performance; we name this enhanced model TFDense-GAN. We evaluate TFDense-Net and TFDense-GAN on two public datasets: the VCTK + DEMAND dataset and the Interspeech Deep Noise Suppression Challenge dataset. Experimental results show that TFDense-GAN outperforms most existing models in terms of STOI, PESQ, and Si-SDR, achieving state-of-the-art results.
The comparison samples of TFDense-GAN and other models can be accessed from https://github.com/yhsjoker/TFDense-GAN.
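The multi-spectrogram discriminator described in the abstract scores the enhanced signal against spectrograms computed with several STFT resolutions, so that both fine temporal and fine spectral structure are checked. A minimal sketch of the multi-resolution spectrogram computation such a discriminator would consume is below; the FFT sizes, hop lengths, and sample rate are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from scipy.signal import stft

def multi_stft_spectrograms(x, fs=16000, fft_sizes=(512, 1024, 2048)):
    """Magnitude spectrograms of x at several STFT resolutions.

    Illustrative sketch only: the FFT sizes and 25% hop are assumed,
    not taken from the TFDense-GAN paper.
    """
    specs = []
    for n_fft in fft_sizes:
        hop = n_fft // 4  # assumed 75% overlap
        _, _, Z = stft(x, fs=fs, nperseg=n_fft, noverlap=n_fft - hop)
        specs.append(np.abs(Z))  # keep only the magnitude
    return specs

# Short windows favor time resolution, long windows favor frequency
# resolution; a multi-spectrogram discriminator sees all of them.
x = np.random.randn(16000)  # 1 s of dummy audio
for spec in multi_stft_spectrograms(x):
    print(spec.shape)  # (n_fft // 2 + 1, num_frames)
```

Each element of the returned list has `n_fft // 2 + 1` frequency bins, so the discriminator branches operate on inputs of different shapes and their losses are summed.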
Pages: 24