Learning Aligned Cross-Modal Representations from Weakly Aligned Data

Cited by: 93
Authors
Castrejon, Lluis [1 ]
Aytar, Yusuf [2 ]
Vondrick, Carl [2 ]
Pirsiavash, Hamed [3 ]
Torralba, Antonio [2 ]
Affiliations
[1] Univ Toronto, Toronto, ON, Canada
[2] MIT CSAIL, Cambridge, MA USA
[3] Univ Maryland Baltimore Cty, Baltimore, MD 21228 USA
Source
2016 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2016
DOI
10.1109/CVPR.2016.321
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
People can recognize scenes across many different modalities beyond natural images. In this paper, we investigate how to learn cross-modal scene representations that transfer across modalities. To study this problem, we introduce a new cross-modal scene dataset. While convolutional neural networks can categorize cross-modal scenes well, they also learn an intermediate representation not aligned across modalities, which is undesirable for cross-modal transfer applications. We present methods to regularize cross-modal convolutional neural networks so that they have a shared representation that is agnostic of the modality. Our experiments suggest that our scene representation can help transfer representations across modalities for retrieval. Moreover, our visualizations suggest that units emerge in the shared representation that tend to activate on consistent concepts independently of the modality.
Pages: 2940-2949
Page count: 10