AUDIO-VISUAL MULTI-CHANNEL SPEECH SEPARATION, DEREVERBERATION AND RECOGNITION

Cited: 4
Authors
Li, Guinan [1]
Yu, Jianwei [1,2]
Deng, Jiajun [1]
Liu, Xunying [1]
Meng, Helen [1]
Affiliations
[1] Chinese Univ Hong Kong, Hong Kong, Peoples R China
[2] Tencent AI Lab, Bellevue, WA USA
Source
2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP) | 2022
Keywords
Audio-visual; Speech separation; dereverberation and recognition;
DOI
10.1109/ICASSP43922.2022.9747237
CLC number
O42 [Acoustics]
Discipline codes
070206; 082403
Abstract
Despite the rapid advance of automatic speech recognition (ASR) technologies, accurate recognition of cocktail party speech characterised by the interference from overlapping speakers, background noise and room reverberation remains a highly challenging task to date. Motivated by the invariance of visual modality to acoustic signal corruption, audio-visual speech enhancement techniques have been developed, although predominantly targeting overlapping speech separation and recognition tasks. In this paper, an audio-visual multi-channel speech separation, dereverberation and recognition approach featuring a full incorporation of visual information into all three stages of the system is proposed. The advantage of the additional visual modality over using audio only is demonstrated on two neural dereverberation approaches based on DNN-WPE and spectral mapping, respectively. The learning cost function mismatch between the separation and dereverberation models and their integration with the back-end recognition system is minimised using fine-tuning on the MSE and LF-MMI criteria. Experiments conducted on the LRS2 dataset suggest that the proposed audio-visual multi-channel speech separation, dereverberation and recognition system outperforms the baseline audio-visual multi-channel speech separation and recognition system containing no dereverberation module by a statistically significant word error rate (WER) reduction of 2.06% absolute (8.77% relative).
Pages: 6042-6046
Page count: 5