Visually Supervised Speaker Detection and Localization via Microphone Array

Cited by: 4
Authors
Berghi, Davide [1 ]
Hilton, Adrian [1 ]
Jackson, Philip J. B. [1 ]
Affiliations
[1] Univ Surrey, CVSSP, Guildford, Surrey, England
Source
IEEE MMSP 2021: 2021 IEEE 23rd International Workshop on Multimedia Signal Processing (MMSP) | 2021
Funding
Academy of Finland; Innovate UK
Keywords
speaker localization; self-supervised learning; voice activity detection; microphone array beamforming;
DOI
10.1109/MMSP53017.2021.9733678
Chinese Library Classification
TP31 [Computer Software];
Subject Classification Code
081202 ; 0835 ;
Abstract
Active speaker detection (ASD) is a multi-modal task that aims to identify who, if anyone, is speaking from a set of candidates. Current audio-visual approaches for ASD typically rely on visually pre-extracted face tracks (sequences of consecutive face crops) and the respective monaural audio. However, their recall rate is often low as only the visible faces are included in the set of candidates. Monaural audio may successfully detect the presence of speech activity but fails to localize the speaker due to the lack of spatial cues. Our solution extends the audio front-end using a microphone array. We train an audio convolutional neural network (CNN) in combination with beamforming techniques to regress the speaker's horizontal position directly in the video frames. We propose to generate weak labels using a pre-trained active speaker detector on pre-extracted face tracks. Our pipeline embraces the "student-teacher" paradigm, in which a trained "teacher" network produces pseudo-labels from the visual modality and a "student" audio network is trained to reproduce them. At inference, the student network can independently localize the speaker in the visual frames directly from the audio input. Experimental results on newly collected data show that our approach significantly outperforms a variety of other baselines as well as the teacher network itself, and it also yields an excellent speech activity detector.
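To make the student-teacher pipeline concrete, below is a minimal, hypothetical PyTorch sketch of the audio "student" described in the abstract: a small CNN that maps beamformed multichannel audio features to a normalized horizontal position and a speech-activity score, trained against pseudo-labels from a visual "teacher" detector. The feature shapes, layer sizes, and loss weighting are illustrative assumptions, not the authors' exact configuration.

```python
# Hypothetical sketch of the audio "student" network: it regresses the speaker's
# horizontal position in the frame from beamformed audio features, supervised by
# pseudo-labels that a visual "teacher" active speaker detector would provide.
import torch
import torch.nn as nn

class AudioStudentCNN(nn.Module):
    def __init__(self, n_channels: int = 4, n_mels: int = 64):
        super().__init__()
        # Each beamformed channel is treated as an input plane of a 2-D CNN
        # over (mel bands x time frames).
        self.encoder = nn.Sequential(
            nn.Conv2d(n_channels, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Two heads: normalized horizontal position in [0, 1] and a
        # voice-activity logit, mirroring the localization + detection task.
        self.position_head = nn.Sequential(nn.Flatten(), nn.Linear(64, 1), nn.Sigmoid())
        self.vad_head = nn.Sequential(nn.Flatten(), nn.Linear(64, 1))

    def forward(self, x):
        h = self.encoder(x)  # (B, 64, 1, 1)
        return self.position_head(h), self.vad_head(h)

def train_step(model, optimizer, audio_feats, teacher_pos, teacher_active):
    """One student-teacher distillation step (illustrative).

    audio_feats:    (B, n_channels, n_mels, T) beamformed log-mel features
    teacher_pos:    (B, 1) pseudo-label horizontal position in [0, 1]
    teacher_active: (B, 1) pseudo-label speech activity, 0 or 1
    """
    pred_pos, pred_vad = model(audio_feats)
    # Regress position only on frames the teacher marked as active speech.
    pos_loss = (teacher_active * (pred_pos - teacher_pos).abs()).sum() \
        / teacher_active.sum().clamp(min=1)
    vad_loss = nn.functional.binary_cross_entropy_with_logits(pred_vad, teacher_active)
    loss = pos_loss + vad_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = AudioStudentCNN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    # Dummy batch standing in for beamformed features and teacher pseudo-labels.
    feats = torch.randn(8, 4, 64, 96)
    pos = torch.rand(8, 1)
    active = torch.randint(0, 2, (8, 1)).float()
    print(train_step(model, opt, feats, pos, active))
```

At inference, only the audio branch is needed: the student's position head yields the speaker's horizontal location in the frame, and the activity head acts as a speech detector, matching the behaviour described in the abstract.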
Pages: 6