Looking into Your Speech: Learning Cross-modal Affinity for Audio-visual Speech Separation

Cited by: 31
Authors
Lee, Jiyoung [1]
Chung, Soo-Whan [1,2]
Kim, Sunok [3]
Kang, Hong-Goo [1]
Sohn, Kwanghoon [1]
Affiliations
[1] Yonsei Univ, Dept Elect & Elect Engn, Seoul, South Korea
[2] Naver Corp, Seongnam-si, Gyeonggi Province, South Korea
[3] Korea Aerosp Univ, Goyang-si, Gyeonggi-do, South Korea
Source
2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021 | 2021
Funding
National Research Foundation of Singapore
Keywords
DEEP NEURAL-NETWORKS; SPEAKER
DOI
10.1109/CVPR46437.2021.00139
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
In this paper, we address the problem of separating individual speech signals from videos using audio-visual neural processing. Most conventional approaches utilize frame-wise matching criteria to extract shared information between co-occurring audio and video, so their performance depends heavily on the accuracy of audio-visual synchronization and the effectiveness of their representations. To overcome the frame discontinuity problem between the two modalities caused by transmission delay mismatch or jitter, we propose a cross-modal affinity network (CaffNet) that learns global correspondence as well as locally-varying affinities between audio and visual streams. Because the global term provides stability over the temporal sequence at the utterance level, it resolves the label permutation problem caused by inconsistent speaker assignments. By extending the proposed cross-modal affinity to a complex-valued network, we further improve separation performance in the complex spectral domain. Experimental results verify that the proposed methods outperform conventional ones on various datasets, demonstrating their advantages in real-world scenarios.
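To make the mechanism sketched in the abstract concrete, the following is a minimal PyTorch sketch of a cross-modal affinity computation: a dense audio-to-visual similarity map for local, non-frame-synchronous matching, plus a pooled utterance-level score standing in for the global correspondence term. The function name, tensor shapes, and pooling choices are illustrative assumptions for this record, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def cross_modal_affinity(audio_feat, visual_feat):
    """Sketch of audio-visual affinity.

    audio_feat:  (B, Ta, D) per-frame audio embeddings.
    visual_feat: (B, Tv, D) per-frame visual embeddings.
    Returns visual context aligned to the audio time axis and a
    global (utterance-level) correspondence score.
    """
    # Normalize so dot products behave like cosine similarities.
    a = F.normalize(audio_feat, dim=-1)
    v = F.normalize(visual_feat, dim=-1)

    # Local affinity between every audio frame and every visual frame,
    # so matching does not require strict frame-wise synchronization.
    affinity = torch.bmm(a, v.transpose(1, 2))      # (B, Ta, Tv)
    attention = F.softmax(affinity, dim=-1)

    # Visual context re-aligned to the audio time axis.
    aligned_visual = torch.bmm(attention, v)        # (B, Ta, D)

    # One simple stand-in for the global correspondence term:
    # average similarity over the whole utterance.
    global_score = affinity.mean(dim=(1, 2))        # (B,)
    return aligned_visual, global_score

# Toy usage with arbitrary shapes.
audio = torch.randn(2, 100, 256)   # e.g., 100 spectrogram frames
video = torch.randn(2, 25, 256)    # e.g., 25 video frames
aligned, score = cross_modal_affinity(audio, video)
```

Pooling the full affinity map into a single utterance-level score is only one simple way to realize a "global" term that stays consistent across time; the paper's actual formulation of the global correspondence may differ.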
Pages: 1336-1345
Number of pages: 10
References (57 in total)
[1]  
Afouras T., 2018, arXiv preprint arXiv:1809.00496
[2]
Afouras, Triantafyllos; Chung, Joon Son; Senior, Andrew; Vinyals, Oriol; Zisserman, Andrew. Deep Audio-Visual Speech Recognition. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2022, 44(12): 8717-8727.
[3]
Afouras, Triantafyllos; Owens, Andrew; Chung, Joon Son; Zisserman, Andrew. Self-supervised Learning of Audio-Visual Objects from Video. COMPUTER VISION - ECCV 2020, PT XVIII, 2020, 12363: 208-224.
[4]
Afouras, Triantafyllos; Chung, Joon Son; Zisserman, Andrew. My lips are concealed: Audio-visual speech enhancement through obstructions. INTERSPEECH 2019, 2019: 4295-4299.
[5]  
Afouras T, 2018, INTERSPEECH, P3244
[6]  
[Anonymous], 2018, COMP VIS ECCV 2018 W, DOI 10.1163/9789004385580002
[7]  
[Anonymous], 2007, CVPR
[8]  
[Anonymous], 2016, INT CONF ACOUST SPEE
[9]  
Bahdanau D, 2016, INT CONF ACOUST SPEE, P4945, DOI 10.1109/ICASSP.2016.7472618
[10]  
Bregman A. S., 1994, Auditory Scene Analysis: The Perceptual Organization of Sound