Rethinking the visual cues in audio-visual speaker extraction

Cited by: 7
Authors
Li, Junjie [1 ]
Ge, Meng [2 ,4 ]
Pan, Zexu [3 ]
Cao, Rui [1 ]
Wang, Longbiao [1 ]
Dang, Jianwu [1 ]
Zhang, Shiliang
Affiliations
[1] Tianjin Univ, Coll Intelligence & Comp, Tianjin Key Lab Cognit Comp & Applicat, Tianjin, Peoples R China
[2] Natl Univ Singapore, Dept Elect & Comp Engn, Singapore, Singapore
[3] Natl Univ Singapore, Inst Data Sci, Singapore, Singapore
[4] Shenzhen Res Inst Big Data, Shenzhen, Peoples R China
Source
INTERSPEECH 2023 | 2023
Funding
National Natural Science Foundation of China;
Keywords
Visual cues; speaker extraction; identity; synchronization; decouple; SPEECH;
DOI
10.21437/Interspeech.2023-2545
CLC number
O42 [Acoustics];
Discipline codes
070206; 082403;
Abstract
The Audio-Visual Speaker Extraction (AVSE) algorithm leverages the parallel video recording of a speaker, drawing on two visual cues, speaker identity and synchronization, to outperform audio-only algorithms. However, the visual front-end in AVSE is typically either taken from a pre-trained model or trained end-to-end, so it is unclear which visual cue contributes more to extraction performance. This raises the question of how to utilize the visual cues better. To address this, we propose two training strategies that decouple the learning of the two visual cues. Our experimental results show that both visual cues are useful, with the synchronization cue contributing more. Building on this finding, we introduce a more explainable model, the Decoupled Audio-Visual Speaker Extraction (DAVSE) model, which leverages both visual cues.
Pages: 3754-3758
Page count: 5
Related papers
35 items in total
[1]  
Afouras T., 2018, arXiv preprint arXiv:1809.00496
[2]   Deep Audio-Visual Speech Recognition [J].
Afouras, Triantafyllos ;
Chung, Joon Son ;
Senior, Andrew ;
Vinyals, Oriol ;
Zisserman, Andrew .
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2022, 44 (12) :8717-8727
[3]   ON THE ROLE OF VISUAL CUES IN AUDIOVISUAL SPEECH ENHANCEMENT [J].
Aldeneh, Zakaria ;
Kumar, Anushree Prasanna ;
Theobald, Barry-John ;
Marchi, Erik ;
Kajarekar, Sachin ;
Naik, Devang ;
Abdelaziz, Ahmed Hussen .
2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, :8423-8427
[4]   SOME FURTHER EXPERIMENTS UPON THE RECOGNITION OF SPEECH, WITH ONE AND WITH TWO EARS [J].
CHERRY, EC ;
TAYLOR, WK .
JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA, 1954, 26 (04) :554-559
[5]   FaceFilter: Audio-visual speech separation using still images [J].
Chung, Soo-Whan ;
Choe, Soyeon ;
Chung, Joon Son ;
Kang, Hong-Goo .
INTERSPEECH 2020, 2020, :3481-3485
[6]  
Elminshawi M., 2022, arXiv preprint arXiv:2202.00733
[7]  
Gao R.H., 2021, CVPR, P15490, DOI 10.1109/CVPR46437.2021.01524
[8]   SpEx+: A Complete Time Domain Speaker Extraction Network [J].
Ge, Meng ;
Xu, Chenglin ;
Wang, Longbiao ;
Chng, Eng Siong ;
Dang, Jianwu ;
Li, Haizhou .
INTERSPEECH 2020, 2020, :1406-1410
[9]   MULTI-STAGE SPEAKER EXTRACTION WITH UTTERANCE AND FRAME-LEVEL REFERENCE SIGNALS [J].
Ge, Meng ;
Xu, Chenglin ;
Wang, Longbiao ;
Chng, Eng Siong ;
Dang, Jianwu ;
Li, Haizhou .
2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, :6109-6113
[10]   Visual Input Enhances Selective Speech Envelope Tracking in Auditory Cortex at a "Cocktail Party" [J].
Golumbic, Elana Zion ;
Cogan, Gregory B. ;
Schroeder, Charles E. ;
Poeppel, David .
JOURNAL OF NEUROSCIENCE, 2013, 33 (04) :1417-1426