Weakly Supervised Representation Learning for Audio-Visual Scene Analysis

Cited by: 13
Authors
Parekh, Sanjeel [1 ]
Essid, Slim [1 ]
Ozerov, Alexey [2 ]
Ngoc Q K Duong [2 ]
Perez, Patrick [3 ]
Richard, Gael [1 ]
Affiliations
[1] Telecom Paris, F-91120 Paris, France
[2] InterDigital, F-35510 Cesson Sevigne, France
[3] Valeo.ai, F-75008 Paris, France
Keywords
Visualization; Task analysis; Videos; Proposals; Feature extraction; Event detection; Source separation; Multimodal classification; sound event detection; object localization; multiple instance learning; deep learning; audio-visual fusion; SOUND EVENT DETECTION; OBJECT; IDENTIFICATION; CLASSIFICATION; CATEGORIZATION; SEGMENTATION; RECOGNITION;
DOI
10.1109/TASLP.2019.2957889
CLC Number
O42 [Acoustics];
Discipline Codes
070206; 082403;
Abstract
Audio-visual (AV) representation learning is an important task from the perspective of designing machines with the ability to understand complex events. To this end, we propose a novel multimodal framework that instantiates multiple instance learning. Specifically, we develop methods that identify events and localize the corresponding AV cues in unconstrained videos. Importantly, this is done using weak labels, where only video-level event labels are known, without any information about their location in time. We show that the learnt representations are useful for performing several tasks such as event/object classification, audio event detection, audio source separation and visual object localization. An important feature of our method is its capacity to learn from unsynchronized audio-visual events. We also demonstrate our framework's ability to separate out the audio source of interest through a novel use of nonnegative matrix factorization. State-of-the-art classification results, with an F1-score of 65.0, are achieved on DCASE 2017 smart cars challenge data, with promising generalization to diverse object types such as musical instruments. Visualizations of localized visual regions and audio segments substantiate our system's efficacy, especially when dealing with noisy situations where modality-specific cues appear asynchronously.
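The multiple-instance-learning setup described in the abstract can be sketched as follows. This is not the authors' implementation; it is a minimal illustration, assuming per-segment class scores and max-pooling as the aggregation function, of how a video-level weak label constrains segment-level predictions and yields temporal localization as a by-product.

```python
import numpy as np

def mil_max_pool(segment_scores: np.ndarray) -> np.ndarray:
    """Aggregate per-segment class scores (n_segments, n_classes)
    into a single video-level score vector (n_classes,).

    Under MIL, only the video-level label supervises training, so the
    pooled score is what the loss is computed against."""
    return segment_scores.max(axis=0)

# Toy example: 4 temporal segments, 3 event classes (scores assumed).
scores = np.array([
    [0.1, 0.2, 0.0],
    [0.9, 0.1, 0.3],  # class 0 is strongly evident in segment 1
    [0.2, 0.1, 0.2],
    [0.3, 0.4, 0.1],
])

video_level = mil_max_pool(scores)          # [0.9, 0.4, 0.3]
event_segment = int(scores[:, 0].argmax())  # 1: localizes class 0 in time
```

The segment attaining the maximum score is the model's temporal localization of the event, which is how weak (video-level) labels can still produce localized AV cues.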
Pages: 416-428
Page count: 13