Modeling Context in Referring Expressions

Cited by: 676
Authors
Yu, Licheng [1 ]
Poirson, Patrick [1 ]
Yang, Shan [1 ]
Berg, Alexander C. [1 ]
Berg, Tamara L. [1 ]
Affiliations
[1] Univ North Carolina Chapel Hill, Dept Comp Sci, Chapel Hill, NC 27514 USA
Source
COMPUTER VISION - ECCV 2016, PT II | 2016 / Vol. 9906
Funding
US National Science Foundation;
Keywords
Language; Language and vision; Generation; Referring expression generation;
DOI
10.1007/978-3-319-46475-6_5
CLC (Chinese Library Classification) number
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Humans refer to objects in their environments all the time, especially in dialogue with other people. We explore generating and comprehending natural language referring expressions for objects in images. In particular, we focus on incorporating better measures of visual context into referring expression models and find that visual comparison to other objects within an image helps improve performance significantly. We also develop methods to tie the language generation process together, so that we generate expressions for all objects of a particular category jointly. Evaluation on three recent datasets - RefCOCO, RefCOCO+, and RefCOCOg (datasets and toolbox can be downloaded from https://github.com/lichengunc/refer) - shows the advantages of our methods for both referring expression generation and comprehension.
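The abstract's core idea, comparing the target object visually against other objects of the same category in the image, can be sketched as pooling normalized feature differences. This is a minimal illustration only; the function name, feature vectors, and pooling choice are assumptions, not the paper's exact formulation.

```python
import numpy as np

def visual_comparison_feature(target_feat, context_feats):
    """Pool normalized pairwise differences between the target object's
    appearance feature and the features of other same-category objects.
    Illustrative sketch of the visual-comparison idea from the abstract."""
    if len(context_feats) == 0:
        return np.zeros_like(target_feat)
    diffs = []
    for c in context_feats:
        d = target_feat - c
        diffs.append(d / (np.linalg.norm(d) + 1e-8))  # normalize each difference
    return np.mean(diffs, axis=0)  # average-pool over context objects

# Toy example: one context object on each side of the target cancels out.
target = np.array([1.0, 0.0])
others = [np.array([0.0, 0.0]), np.array([2.0, 0.0])]
feat = visual_comparison_feature(target, others)
```

In practice such a comparison feature would be concatenated with the target's own appearance and location features before being fed to the expression model.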
Pages: 69-85
Page count: 17