Multimodal Learning in Loosely-organized Web Images

Cited by: 5
Authors
Duan, Kun [1 ]
Crandall, David J. [1 ]
Batra, Dhruv [2 ]
Affiliations
[1] Indiana Univ, Bloomington, IN 47405 USA
[2] Virginia Tech, Blacksburg, VA USA
Source
2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014
DOI
10.1109/CVPR.2014.316
CLC classification
TP18 [Artificial intelligence theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Photo-sharing websites have become very popular in the last few years, leading to huge collections of online images. In addition to image data, these websites collect a variety of multimodal metadata about photos including text tags, captions, GPS coordinates, camera metadata, user profiles, etc. However, this metadata is not well constrained and is often noisy, sparse, or missing altogether. In this paper, we propose a framework to model these "loosely organized" multimodal datasets, and show how to perform loosely-supervised learning using a novel latent Conditional Random Field framework. We learn parameters of the LCRF automatically from a small set of validation data, using Information Theoretic Metric Learning (ITML) to learn distance functions and a structural SVM formulation to learn the potential functions. We apply our framework on four datasets of images from Flickr, evaluating both qualitatively and quantitatively against several baselines.
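For orientation, the abstract describes a latent CRF over images whose pairwise terms rely on ITML-learned distance functions and whose potential weights are fit with a structural SVM. A generic contrast-sensitive latent CRF energy of that kind might look like the sketch below; the notation (h_i, \phi, \mathbf{w}_u, w_p, M, \mathcal{E}) is assumed for illustration and is not taken from the paper.

E(\mathbf{h} \mid \mathbf{x}; \mathbf{w}) = \sum_i \mathbf{w}_u^\top \phi(x_i, h_i) \;+\; \sum_{(i,j) \in \mathcal{E}} w_p \, e^{-d_M(x_i, x_j)} \, [h_i \neq h_j],
\qquad d_M(x_i, x_j) = (x_i - x_j)^\top M (x_i - x_j)

Here h_i is the latent label of image i, \mathcal{E} links images that share metadata, the Mahalanobis matrix M \succeq 0 would be learned with ITML from the small validation set, and the weights \mathbf{w} = (\mathbf{w}_u, w_p) with a structural SVM, as the abstract indicates.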
Pages: 2465-2472
Number of pages: 8