UFORMER: A UNET BASED DILATED COMPLEX & REAL DUAL-PATH CONFORMER NETWORK FOR SIMULTANEOUS SPEECH ENHANCEMENT AND DEREVERBERATION

Cited by: 35
Authors
Fu, Yihui [1 ]
Liu, Yun [2 ]
Li, Jingdong [2 ]
Luo, Dawei [2 ]
Lv, Shubo [1 ]
Jv, Yukai [1 ]
Xie, Lei [1 ]
Affiliations
[1] Northwestern Polytech Univ, Audio Speech & Language Proc Grp ASLP NPU, Xian, Peoples R China
[2] Sogou Inc, AI Interact Div, Beijing, Peoples R China
Source
2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP) | 2022
Keywords
speech enhancement and dereverberation; Uformer; dilated complex dual-path conformer; hybrid encoder and decoder; encoder decoder attention; DOMAIN;
DOI
10.1109/ICASSP43922.2022.9746020
Chinese Library Classification (CLC) Number
O42 [Acoustics];
Discipline Code
070206; 082403;
Abstract
The complex spectrum and magnitude are considered two major features for speech enhancement and dereverberation. Traditional approaches typically treat these two features separately, ignoring their underlying relationship. In this paper, we propose Uformer, a Unet based dilated complex & real dual-path conformer network operating in both the complex and magnitude domains for simultaneous speech enhancement and dereverberation. We exploit time attention (TA) and dilated convolution (DC) to leverage local and global contextual information, and frequency attention (FA) to model information along the frequency dimension. These three sub-modules, contained in the proposed dilated complex & real dual-path conformer module, effectively improve speech enhancement and dereverberation performance. Furthermore, a hybrid encoder and decoder are adopted to simultaneously model the complex spectrum and magnitude and to promote information interaction between the two domains. Encoder-decoder attention is also applied to enhance the interaction between the encoder and decoder. Uformer outperforms all state-of-the-art (SOTA) time-domain and complex-domain models both objectively and subjectively. Specifically, Uformer reaches 3.6032 DNSMOS on the blind test set of the Interspeech 2021 DNS Challenge, outperforming all top-performing models. We also carry out ablation experiments to assess the contribution of each proposed sub-module.
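As a rough illustration of the dual-path conformer idea summarized in the abstract, the sketch below combines the three named sub-modules in one block: time attention (TA) across frames, frequency attention (FA) across bins, and a dilated convolution (DC) branch. It is a minimal PyTorch sketch under assumed shapes and hyper-parameters (channel count, head count, dilation); it is not the authors' implementation, which additionally couples complex and magnitude branches through a hybrid encoder and decoder.

```python
# Hypothetical sketch of a dual-path block with TA, FA, and DC branches.
# All module names and hyper-parameters are illustrative assumptions.
import torch
import torch.nn as nn


class DualPathBlock(nn.Module):
    """One dual-path block over a (batch, channels, freq, time) feature map."""

    def __init__(self, channels: int = 32, num_heads: int = 4, dilation: int = 2):
        super().__init__()
        # TA: multi-head self-attention along the time axis.
        self.time_attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        # FA: multi-head self-attention along the frequency axis.
        self.freq_attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        # DC: dilated depthwise convolution along time for a wider receptive field.
        self.dilated_conv = nn.Conv2d(
            channels, channels, kernel_size=(1, 3),
            dilation=(1, dilation), padding=(0, dilation), groups=channels,
        )
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, f, t = x.shape
        # Time path: attend across frames, independently per frequency bin.
        xt = x.permute(0, 2, 3, 1).reshape(b * f, t, c)        # (b*f, t, c)
        xt = xt + self.time_attn(xt, xt, xt, need_weights=False)[0]
        x = self.norm(xt).reshape(b, f, t, c).permute(0, 3, 1, 2)
        # Frequency path: attend across bins, independently per frame.
        xf = x.permute(0, 3, 2, 1).reshape(b * t, f, c)        # (b*t, f, c)
        xf = xf + self.freq_attn(xf, xf, xf, need_weights=False)[0]
        x = self.norm(xf).reshape(b, t, f, c).permute(0, 3, 2, 1)
        # Dilated convolution path with a residual connection.
        return x + self.dilated_conv(x)


# Usage on an assumed (batch, channels, freq_bins, frames) feature map,
# as it might appear inside a U-Net bottleneck.
feats = torch.randn(2, 32, 64, 100)
print(DualPathBlock(32)(feats).shape)  # torch.Size([2, 32, 64, 100])
```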
Pages: 7417 - 7421
Page count: 5