Learning Deep Representations of Fine-Grained Visual Descriptions

Cited by: 530
Authors
Reed, Scott [1 ]
Akata, Zeynep [2 ]
Lee, Honglak [1 ]
Schiele, Bernt [2 ]
Affiliations
[1] Univ Michigan, Ann Arbor, MI 48109 USA
[2] Max Planck Inst Informat, Saarbrucken, Germany
Source
2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) | 2016
Funding
U.S. National Science Foundation
Keywords
DOI
10.1109/CVPR.2016.13
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
State-of-the-art methods for zero-shot visual recognition formulate learning as a joint embedding problem of images and side information. In these formulations the current best complement to visual features are attributes: manually-encoded vectors describing shared characteristics among categories. Despite good performance, attributes have limitations: (1) finer-grained recognition requires commensurately more attributes, and (2) attributes do not provide a natural language interface. We propose to overcome these limitations by training neural language models from scratch; i.e. without pre-training and only consuming words and characters. Our proposed models train end-to-end to align with the fine-grained and category-specific content of images. Natural language provides a flexible and compact way of encoding only the salient visual aspects for distinguishing categories. By training on raw text, our model can do inference on raw text as well, providing humans a familiar mode both for annotation and retrieval. Our model achieves strong performance on zero-shot text-based image retrieval and significantly outperforms the attribute-based state-of-the-art for zero-shot classification on the Caltech-UCSD Birds 200-2011 dataset.
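The abstract frames zero-shot recognition as a joint embedding of images and text, scored by compatibility in a shared space. The sketch below is only an illustration of that idea under assumptions of my own (simple linear projections, a symmetric cross-entropy ranking loss, made-up feature dimensions); it is not the paper's CNN-RNN text encoder or its exact structured joint embedding objective.

```python
# Minimal sketch of joint image-text embedding for zero-shot recognition.
# Encoder choices, dimensions, and the loss are illustrative assumptions,
# not the authors' architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointEmbedding(nn.Module):
    def __init__(self, img_dim=1024, txt_dim=300, emb_dim=256):
        super().__init__()
        # Hypothetical linear projections into a shared embedding space.
        self.img_proj = nn.Linear(img_dim, emb_dim)
        self.txt_proj = nn.Linear(txt_dim, emb_dim)

    def compatibility(self, img_feats, txt_feats):
        # Score every image against every text description via dot product
        # in the shared space; higher means a better match.
        v = F.normalize(self.img_proj(img_feats), dim=-1)
        t = F.normalize(self.txt_proj(txt_feats), dim=-1)
        return v @ t.T  # shape: (num_images, num_texts)

def symmetric_ranking_loss(scores):
    # For a square batch of matched (image, text) pairs, push the diagonal
    # (true pairs) above mismatches in both directions -- a common proxy
    # for symmetric joint-embedding objectives.
    targets = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, targets) + F.cross_entropy(scores.T, targets)

# Zero-shot classification: embed one text description per unseen class,
# then assign each test image to the class with the highest compatibility.
model = JointEmbedding()
img_feats = torch.randn(8, 1024)   # e.g. pretrained CNN features (assumption)
cls_texts = torch.randn(5, 300)    # encoded class descriptions (assumption)
scores = model.compatibility(img_feats, cls_texts)
predicted_class = scores.argmax(dim=1)
```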
Pages: 49-58
Page count: 10