SuperFormer: Enhanced Multi-Speaker Speech Separation Network Combining Channel and Spatial Adaptability

Citations: 0
Authors
Jiang, Yanji [1 ,2 ]
Qiu, Youli [1 ]
Shen, Xueli [1 ]
Sun, Chuan [2 ,3 ]
Liu, Haitao [2 ]
Affiliations
[1] Liaoning Tech Univ, Sch Software, Huludao 125105, Peoples R China
[2] Tsinghua Univ, Suzhou Automot Res Inst, Suzhou 215100, Peoples R China
[3] Hong Kong Polytech Univ, Dept Civil & Environm Engn, Hung Hom, Kowloon, Hong Kong, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2022, Vol. 12, Issue 15
Funding
National Natural Science Foundation of China;
Keywords
multi-speaker separation; speech separation; transformer; speaker enhancement; adaptive network;
DOI
10.3390/app12157650
Chinese Library Classification
O6 [Chemistry];
Discipline Code
0703;
Abstract
Speech separation is a hot topic in multi-speaker speech recognition, and modelling the long-term autocorrelation of speech signal sequences is essential to it. The key challenges are effective intra-autocorrelation learning of each speaker's speech, modelling the local (intra-block) and global (intra- and inter-block) dependence features of the speech sequence, and achieving real-time separation with as few parameters as possible. In this paper, the local and global dependence features of the speech sequence are extracted by different transformer structures. A forward adaptive module of channel and spatial autocorrelation is proposed to give the separation model good channel adaptability (channel-adaptive modelling) and spatial adaptability (spatial-adaptive modelling). In addition, a speaker enhancement module at the back end of the separation model further enhances or suppresses the speech of the different speakers by exploiting the mutual suppression characteristics of the source signals. Experiments on the public corpus WSJ0-2mix show that, in terms of scale-invariant signal-to-noise ratio improvement (SI-SNRi), the proposed separation network achieves better separation performance than the baseline models. The proposed method can provide a solution for speech separation and speech recognition in multi-speaker scenarios.
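The SI-SNRi metric reported above is the gain in scale-invariant signal-to-noise ratio of the separated output over the unprocessed mixture. The abstract does not spell out the computation, but SI-SNR has a standard definition; the following is a minimal NumPy sketch of it (function names are illustrative, not from the paper):

```python
import numpy as np

def si_snr(estimate: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Scale-invariant signal-to-noise ratio (SI-SNR) in dB.

    Both signals are zero-meaned, then the estimate is decomposed into a
    component along the target (s_target) and a residual (e_noise), making
    the score invariant to the estimate's overall gain.
    """
    estimate = estimate - estimate.mean()
    target = target - target.mean()
    # Orthogonal projection of the estimate onto the target signal.
    s_target = np.dot(estimate, target) / (np.dot(target, target) + eps) * target
    e_noise = estimate - s_target
    return 10.0 * np.log10(np.dot(s_target, s_target) / (np.dot(e_noise, e_noise) + eps))

def si_snri(estimate: np.ndarray, mixture: np.ndarray, target: np.ndarray) -> float:
    """SI-SNR improvement: separated output vs. the raw mixture."""
    return si_snr(estimate, target) - si_snr(mixture, target)
```

Because of the projection step, rescaling the estimate (e.g. doubling its amplitude) leaves the score unchanged, which is why SI-SNR is preferred over plain SNR when separation models are free to produce outputs at arbitrary gain.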
Pages: 15
Related Papers
50 records
  • [41] Training Multi-Speaker Neural Text-to-Speech Systems using Speaker-Imbalanced Speech Corpora
    Luong, Hieu-Thi
    Wang, Xin
    Yamagishi, Junichi
    Nishizawa, Nobuyuki
    INTERSPEECH 2019, 2019, : 1303 - 1307
  • [42] Deep Gaussian process based multi-speaker speech synthesis with latent speaker representation
    Mitsui, Kentaro
    Koriyama, Tomoki
    Saruwatari, Hiroshi
    SPEECH COMMUNICATION, 2021, 132 : 132 - 145
  • [43] DNN based multi-speaker speech synthesis with temporal auxiliary speaker ID embedding
    Lee, Junmo
    Song, Kwangsub
    Noh, Kyoungjin
    Park, Tae-Jun
    Chang, Joon-Hyuk
    2019 INTERNATIONAL CONFERENCE ON ELECTRONICS, INFORMATION, AND COMMUNICATION (ICEIC), 2019, : 61 - 64
  • [44] CLeLfPC: a Large Open Multi-Speaker Corpus of French Cued Speech
    Bigi, Brigitte
    Zimmermann, Maryvonne
    Andre, Carine
    LREC 2022: THIRTEEN INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, 2022, : 987 - 994
  • [45] Phoneme Duration Modeling Using Speech Rhythm-Based Speaker Embeddings for Multi-Speaker Speech Synthesis
    Fujita, Kenichi
    Ando, Atsushi
    Ijima, Yusuke
    INTERSPEECH 2021, 2021, : 3141 - 3145
  • [46] Integrating Spectral and Spatial Features for Multi-Channel Speaker Separation
    Wang, Zhong-Qiu
    Wang, DeLiang
    19TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2018), VOLS 1-6: SPEECH RESEARCH FOR EMERGING MARKETS IN MULTILINGUAL SOCIETIES, 2018, : 2718 - 2722
  • [47] A multi-channel/multi-speaker interactive 3D Audio-Visual Speech Corpus in Mandarin
    Yu, Jun
    Su, Rongfeng
    Wang, Lan
    Zhou, Wenpeng
    2016 10TH INTERNATIONAL SYMPOSIUM ON CHINESE SPOKEN LANGUAGE PROCESSING (ISCSLP), 2016,
  • [48] Multi-Speaker ASR Combining Non-Autoregressive Conformer CTC and Conditional Speaker Chain
    Guo, Pengcheng
    Chang, Xuankai
    Watanabe, Shinji
    Xie, Lei
    INTERSPEECH 2021, 2021, : 3720 - 3724
  • [49] Normalization Driven Zero-shot Multi-Speaker Speech Synthesis
    Kumar, Neeraj
    Goel, Srishti
    Narang, Ankur
    Lall, Brejesh
    INTERSPEECH 2021, 2021, : 1354 - 1358
  • [50] A Purely End-to-end System for Multi-speaker Speech Recognition
    Seki, Hiroshi
    Hori, Takaaki
    Watanabe, Shinji
    Le Roux, Jonathan
    Hershey, John R.
    PROCEEDINGS OF THE 56TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL), VOL 1, 2018, : 2620 - 2630