Write a Classifier: Predicting Visual Classifiers from Unstructured Text

Cited by: 17
Authors
Elhoseiny, Mohamed [1 ]
Elgammal, Ahmed [2 ]
Saleh, Babak [2 ]
Affiliations
[1] Facebook Inc, Facebook AI Res, Menlo Pk, CA 94025 USA
[2] Rutgers State Univ, Comp Sci Dept, 110 Frelinghuysen Rd, Piscataway, NJ 08854 USA
Funding
US National Science Foundation;
Keywords
Language and vision; zero shot learning; unstructured text; noisy text; OBJECT; EXAMPLE;
DOI
10.1109/TPAMI.2016.2643667
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
People typically learn through exposure to visual concepts associated with linguistic descriptions. For instance, teaching visual object categories to children is often accompanied by descriptions in text or speech. In a machine learning context, these observations motivate us to ask whether this learning process could be computationally modeled to learn visual classifiers. More specifically, the main question of this work is how to utilize a purely textual description of visual classes, with no training images, to learn explicit visual classifiers for them. We propose and investigate two baseline formulations, based on regression and domain transfer, that predict a linear classifier. We then propose a new constrained optimization formulation that combines a regression function and a knowledge transfer function with additional constraints to predict the parameters of a linear classifier. We also propose generic kernelized models in which a kernel classifier is predicted in the form defined by the representer theorem. The kernelized models allow defining and utilizing any two Reproducing Kernel Hilbert Space (RKHS) kernel functions in the visual space and the text space, respectively. Finally, we propose a kernel function between unstructured text descriptions that builds on distributional semantics, which shows an advantage in our setting and could be useful for other applications. We applied all the studied models to predict visual classifiers on two fine-grained and challenging categorization datasets (the CU Birds and Flowers datasets), and the results show that our final model successfully predicts classifiers and outperforms several baselines that we designed.
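The abstract's simplest baseline, a regression that maps a class-level text representation to the weights of a linear visual classifier, can be illustrated with the minimal sketch below. This is not the authors' implementation: the ridge-regression closed form, the variable names, and the toy dimensions are illustrative assumptions, shown only to make the "predict a classifier from text" idea concrete.

```python
# Minimal sketch (assumed setup, not the paper's code): learn a linear map M
# from text embeddings of seen classes to the weights of linear visual
# classifiers trained on their images, then predict classifiers for unseen
# classes from text alone.
import numpy as np

def fit_classifier_regressor(T_seen, W_seen, lam=1.0):
    """Ridge regression: find M minimizing ||T_seen @ M - W_seen||^2 + lam ||M||^2.

    T_seen : (C, d_t) text embeddings of the seen classes
    W_seen : (C, d_v) linear classifier weights learned from images of those classes
    """
    d_t = T_seen.shape[1]
    # Closed-form ridge solution: M = (T^T T + lam I)^{-1} T^T W
    return np.linalg.solve(T_seen.T @ T_seen + lam * np.eye(d_t), T_seen.T @ W_seen)

def predict_classifier(t_new, M):
    """Predict classifier weights for unseen classes from their text embeddings."""
    return t_new @ M

def zero_shot_predict(X, W_unseen):
    """Assign each image feature in X to the unseen class with the highest score."""
    scores = X @ W_unseen.T          # (n_images, n_unseen_classes)
    return scores.argmax(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    C, d_t, d_v = 20, 50, 128        # toy sizes: seen classes, text dim, visual dim
    T_seen = rng.normal(size=(C, d_t))
    W_seen = rng.normal(size=(C, d_v))
    M = fit_classifier_regressor(T_seen, W_seen, lam=0.1)

    t_unseen = rng.normal(size=(3, d_t))   # text embeddings of 3 unseen classes
    W_unseen = predict_classifier(t_unseen, M)
    X = rng.normal(size=(5, d_v))          # 5 image features to classify
    print(zero_shot_predict(X, W_unseen))
```

The paper's final model goes beyond this regression baseline by adding a knowledge transfer term and constraints, and by kernelizing both the visual and the text side via the representer theorem; the sketch only covers the regression starting point.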
Pages: 2539-2553
Page count: 15