What are you talking about? Text-to-Image Coreference

Cited by: 87
Authors
Kong, Chen [1 ]
Lin, Dahua [3 ]
Bansal, Mohit [3 ]
Urtasun, Raquel [2 ,3 ]
Fidler, Sanja [2 ,3 ]
Affiliations
[1] Tsinghua Univ, Beijing, Peoples R China
[2] Univ Toronto, Toronto, ON M5S 1A1, Canada
[3] TTI Chicago, Chicago, IL USA
Source
2014 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2014
DOI
10.1109/CVPR.2014.455
CLC number
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
In this paper we exploit natural sentential descriptions of RGB-D scenes in order to improve 3D semantic parsing. Importantly, in doing so, we reason about which particular object each noun/pronoun is referring to in the image. This allows us to utilize visual information in order to disambiguate the so-called coreference resolution problem that arises in text. Towards this goal, we propose a structure prediction model that exploits potentials computed from text and RGB-D imagery to reason about the class of the 3D objects, the scene type, as well as to align the nouns/pronouns with the referred visual objects. We demonstrate the effectiveness of our approach on the challenging NYU-RGBD v2 dataset, which we enrich with natural lingual descriptions. We show that our approach significantly improves 3D detection and scene classification accuracy, and is able to reliably estimate the text-to-image alignment. Furthermore, by using textual and visual information, we are also able to successfully deal with coreference in text, improving upon the state-of-the-art Stanford coreference system [15].
Pages: 3558-3565
Page count: 8