Look, Listen and Learn

Cited: 607
Authors
Arandjelovic, Relja [1 ]
Zisserman, Andrew [1 ,2 ]
Affiliations
[1] DeepMind, London, England
[2] Univ Oxford, Dept Engn Sci, VGG, Oxford, England
Source
2017 IEEE International Conference on Computer Vision (ICCV) | 2017
DOI
10.1109/ICCV.2017.73
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification
081104; 0812; 0835; 1405
Abstract
We consider the question: what can be learnt by looking at and listening to a large number of unlabelled videos? There is a valuable, but so far untapped, source of information contained in the video itself: the correspondence between the visual and the audio streams. We introduce a novel "Audio-Visual Correspondence" learning task that makes use of this. Training visual and audio networks from scratch, without any supervision other than the raw unconstrained videos themselves, is shown to successfully solve this task and, more interestingly, to result in good visual and audio representations. These features set the new state of the art on two sound classification benchmarks, and perform on par with the state-of-the-art self-supervised approaches on ImageNet classification. We also demonstrate that the network is able to localize objects in both modalities, as well as perform fine-grained recognition tasks.
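The core of the Audio-Visual Correspondence (AVC) task described in the abstract is a binary classification problem: given a single video frame and a short audio clip, decide whether they come from the same moment of the same video. Below is a minimal PyTorch sketch of that setup; the layer stacks, embedding size, and input shapes are illustrative placeholders rather than the paper's exact VGG-style two-tower architecture, and the pair construction is only indicated in comments.

```python
import torch
import torch.nn as nn

class AVCNet(nn.Module):
    """Minimal sketch of an Audio-Visual Correspondence (AVC) classifier.

    A vision subnetwork embeds a video frame and an audio subnetwork
    embeds a short spectrogram; the two embeddings are fused and
    classified as corresponding / not corresponding. Layer sizes are
    placeholders, not the paper's architecture.
    """
    def __init__(self, embed_dim=128):
        super().__init__()
        # Placeholder conv stack standing in for the visual tower.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, embed_dim),
        )
        # Placeholder conv stack standing in for the audio tower
        # (operates on a 1-channel log-spectrogram "image").
        self.audio = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, embed_dim),
        )
        # Fusion head: concatenated embeddings -> 2-way correspondence logits.
        self.head = nn.Sequential(
            nn.Linear(2 * embed_dim, 128), nn.ReLU(),
            nn.Linear(128, 2),
        )

    def forward(self, frame, spectrogram):
        v = self.vision(frame)
        a = self.audio(spectrogram)
        return self.head(torch.cat([v, a], dim=1))

# Training signal comes for free from the videos: a positive pair is a
# frame with audio taken from the same instant of the same video; a
# negative pair mixes a frame with audio from a different video.
frames = torch.randn(8, 3, 224, 224)   # batch of video frames (shape illustrative)
specs = torch.randn(8, 1, 257, 200)    # batch of log-spectrograms (shape illustrative)
labels = torch.randint(0, 2, (8,))     # 1 = corresponding, 0 = mismatched
logits = AVCNet()(frames, specs)
loss = nn.functional.cross_entropy(logits, labels)
```

Because positives and negatives are sampled directly from the unlabelled videos, both subnetworks can be trained end to end on this objective with no manual annotation, which is what yields the transferable visual and audio representations the abstract reports.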
Pages: 609-617
Number of pages: 9