Multi-View Attention Transfer for Efficient Speech Enhancement

Cited by: 3
Authors
Shin, Wooseok [1 ]
Park, Hyun Joon [1 ]
Kim, Jin Sob [1 ]
Lee, Byung Hoon [1 ]
Han, Sung Won [1 ]
Affiliations
[1] Korea Univ, Sch Ind & Management Engn, Seoul, South Korea
Source
INTERSPEECH 2022 | 2022
Keywords
speech enhancement; multi-view knowledge distillation; feature distillation; time domain; low complexity;
DOI
10.21437/Interspeech.2022-10251
CLC number
O42 [Acoustics];
Subject classification codes
070206 ; 082403 ;
Abstract
Recent deep learning models have achieved high performance in speech enhancement; however, it is still challenging to obtain a fast and low-complexity model without significant performance degradation. Previous knowledge distillation studies on speech enhancement could not solve this problem because their output distillation methods do not fit the speech enhancement task in some aspects. In this study, we propose multi-view attention transfer (MV-AT), a feature-based distillation, to obtain efficient speech enhancement models in the time domain. Based on the multi-view feature extraction model, MV-AT transfers multi-view knowledge of the teacher network to the student network without additional parameters. The experimental results show that the proposed method consistently improved the performance of student models of various sizes on the Valentini and deep noise suppression (DNS) datasets. MANNER-S-8.1GF with our proposed method, a lightweight model for efficient deployment, achieved 15.4x and 4.71x fewer parameters and floating-point operations (FLOPs), respectively, compared to the baseline model with similar performance.
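The abstract describes a feature-based distillation that matches teacher and student attention maps without adding parameters. As a rough illustration of the general idea (a minimal generic attention-transfer loss in the style of Zagoruyko and Komodakis, not the paper's exact MV-AT; the single-view maps and layer pairing here are assumptions for the sketch):

```python
import numpy as np

def attention_map(feat):
    """Collapse the channel axis of a (channels, time) feature map into a
    unit-norm attention map over time. This is a generic single-view map;
    MV-AT itself aggregates multiple views of the features."""
    amap = (feat ** 2).mean(axis=0)                  # (time,)
    return amap / (np.linalg.norm(amap) + 1e-12)     # normalize to unit L2 norm

def attention_transfer_loss(student_feats, teacher_feats):
    """Sum of MSE terms between matched student/teacher attention maps.
    Because the maps collapse the channel axis, the layers may have
    different widths and no projection parameters are needed."""
    return sum(
        float(np.mean((attention_map(s) - attention_map(t)) ** 2))
        for s, t in zip(student_feats, teacher_feats)
    )

# Toy usage: two matched layer pairs with different channel counts.
rng = np.random.default_rng(0)
student = [rng.standard_normal((8, 100)), rng.standard_normal((16, 100))]
teacher = [rng.standard_normal((32, 100)), rng.standard_normal((64, 100))]
loss = attention_transfer_loss(student, teacher)
```

During training, this distillation term would be added to the student's enhancement loss; identical features yield a loss of zero.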
Pages: 1198-1202
Number of pages: 5
Related Papers
50 records in total
  • [41] Channel and temporal-frequency attention UNet for monaural speech enhancement
    Shiyun Xu
    Zehua Zhang
    Mingjiang Wang
    EURASIP Journal on Audio, Speech, and Music Processing, 2023
  • [42] Speech Enhancement with Fullband-Subband Cross-Attention Network
    Chen, Jun
    Rao, Wei
    Wang, Zilin
    Wu, Zhiyong
    Wang, Yannan
    Yu, Tao
    Shang, Shidong
    Meng, Helen
    INTERSPEECH 2022, 2022, : 976 - 980
  • [43] FullSubNet+: CHANNEL ATTENTION FULLSUBNET WITH COMPLEX SPECTROGRAMS FOR SPEECH ENHANCEMENT
    Chen, Jun
    Wang, Zilin
    Tuo, Deyi
    Wu, Zhiyong
    Kang, Shiyin
    Meng, Helen
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 7857 - 7861
  • [44] Temporal Convolutional Network with Frequency Dimension Adaptive Attention for Speech Enhancement
    Zhang, Qiquan
    Song, Qi
    Nicolson, Aaron
    Lan, Tian
    Li, Haizhou
    INTERSPEECH 2021, 2021, : 166 - 170
  • [45] Channel and temporal-frequency attention UNet for monaural speech enhancement
    Xu, Shiyun
    Zhang, Zehua
    Wang, Mingjiang
    EURASIP JOURNAL ON AUDIO SPEECH AND MUSIC PROCESSING, 2023, 2023 (01)
  • [46] Embedding Encoder-Decoder With Attention Mechanism for Monaural Speech Enhancement
    Lan, Tian
    Ye, Wenzheng
    Lyu, Yilan
    Zhang, Junyi
    Liu, Qiao
    IEEE ACCESS, 2020, 8 : 96677 - 96685
  • [47] Environmental Attention-Guided Branchy Neural Network for Speech Enhancement
    Zhang, Lu
    Wang, Mingjiang
    Zhang, Qiquan
    Liu, Ming
    APPLIED SCIENCES-BASEL, 2020, 10 (03):
  • [48] Real-Time Speech Enhancement Algorithm Based on Attention LSTM
    Liang, Ruiyu
    Kong, Fanliu
    Xie, Yue
    Tang, Guichen
    Cheng, Jiaming
    IEEE ACCESS, 2020, 8 : 48464 - 48476
  • [49] IMPROVING SPEECH RECOGNITION ON NOISY SPEECH VIA SPEECH ENHANCEMENT WITH MULTI-DISCRIMINATORS CYCLEGAN
    Li, Chia-Yu
    Ngoc Thang Vu
    2021 IEEE AUTOMATIC SPEECH RECOGNITION AND UNDERSTANDING WORKSHOP (ASRU), 2021, : 830 - 836
  • [50] Multi-stage Progressive Speech Enhancement Network
    Xu, Xinmeng
    Wang, Yang
    Xu, Dongxiang
    Peng, Yiyuan
    Zhang, Cong
    Jia, Jie
    Chen, Binbin
    INTERSPEECH 2021, 2021, : 2691 - 2695