Neural Spatio-Temporal Beamformer for Target Speech Separation

Cited by: 24
Authors
Xu, Yong [1 ]
Yu, Meng [1 ]
Zhang, Shi-Xiong [1 ]
Chen, Lianwu [2 ]
Weng, Chao [1 ]
Liu, Jianming [1 ]
Yu, Dong [1 ]
Affiliations
[1] Tencent AI Lab, Bellevue, WA 98004 USA
[2] Tencent AI Lab, Shenzhen, Peoples R China
Source
INTERSPEECH 2020 | 2020
Keywords
target speech separation; multi-tap MVDR; mask-based MVDR; spatio-temporal beamformer; NOISE-REDUCTION; ENHANCEMENT; RECOGNITION; END;
DOI
10.21437/Interspeech.2020-1458
Chinese Library Classification (CLC)
R36 [Pathology]; R76 [Otorhinolaryngology];
Subject Classification Codes
100104 ; 100213 ;
Abstract
Purely neural network (NN) based speech separation and enhancement methods, although they can achieve good objective scores, inevitably introduce nonlinear speech distortions that are harmful to automatic speech recognition (ASR). On the other hand, the minimum variance distortionless response (MVDR) beamformer with NN-predicted masks, although it can significantly reduce speech distortion, has limited noise reduction capability. In this paper, we propose a multi-tap MVDR beamformer with complex-valued masks for speech separation and enhancement. Compared to the state-of-the-art NN-mask based MVDR beamformer, the multi-tap MVDR beamformer exploits the inter-frame correlation in addition to the inter-microphone correlation already utilized in prior art. Further improvements include replacing the real-valued masks with complex-valued masks and jointly training the complex-mask NN. Evaluation on our multi-modal multi-channel target speech separation and enhancement platform demonstrates that the proposed multi-tap MVDR beamformer improves both ASR accuracy and perceptual speech quality over prior art.
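To make the abstract concrete, below is a minimal NumPy sketch of the standard mask-based MVDR solution that the paper builds on: per-frequency speech and noise spatial covariance matrices are estimated from mask-weighted STFTs, and the beamforming weights use the Phi_n^{-1} Phi_s / trace formulation with a one-hot reference-microphone vector. This is not the authors' code; all names are illustrative. Under the paper's multi-tap extension, one would stack L consecutive STFT frames per microphone (so `Y` has shape `(mics * L, frames, freq_bins)`) before computing the covariances, which is how inter-frame correlation enters the same equations.

```python
import numpy as np

def mask_based_mvdr(Y, speech_mask, noise_mask, ref_mic=0):
    """Mask-based MVDR beamformer (single-tap sketch, not the paper's code).

    Y           : complex STFT, shape (mics, frames, freq_bins)
    speech_mask : mask for the target speech, shape (frames, freq_bins)
    noise_mask  : mask for noise/interference, shape (frames, freq_bins)
    Returns     : beamformed STFT, shape (frames, freq_bins)
    """
    M, T, F = Y.shape
    out = np.zeros((T, F), dtype=complex)
    u = np.zeros(M)
    u[ref_mic] = 1.0                                  # one-hot reference-mic vector
    for f in range(F):
        Yf = Y[:, :, f]                               # (M, T)
        # Mask-weighted spatial covariance matrices
        phi_s = (speech_mask[:, f] * Yf) @ Yf.conj().T / T
        phi_n = (noise_mask[:, f] * Yf) @ Yf.conj().T / T
        phi_n = phi_n + 1e-6 * np.eye(M)              # diagonal loading for stability
        num = np.linalg.solve(phi_n, phi_s)           # Phi_n^{-1} Phi_s
        w = (num / np.trace(num)) @ u                 # MVDR weights, shape (M,)
        out[:, f] = w.conj() @ Yf                     # w^H y per frame
    return out
```

With complex-valued masks as proposed in the paper, `speech_mask` and `noise_mask` would simply be complex arrays in the same covariance estimates; the MVDR solution itself is unchanged.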
Pages: 56 / 60
Page count: 5