DenseCap: Fully Convolutional Localization Networks for Dense Captioning

Cited by: 615
Authors
Johnson, Justin [1]
Karpathy, Andrej [1]
Fei-Fei, Li [1]
Affiliation
[1] Stanford Univ, Dept Comp Sci, Stanford, CA 94305 USA
Source
2016 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2016
Keywords
DOI
10.1109/CVPR.2016.494
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We introduce the dense captioning task, which requires a computer vision system to both localize and describe salient regions in images in natural language. The dense captioning task generalizes object detection when the descriptions consist of a single word, and image captioning when one predicted region covers the full image. To address the localization and description task jointly, we propose a Fully Convolutional Localization Network (FCLN) architecture that processes an image with a single, efficient forward pass, requires no external region proposals, and can be trained end-to-end with a single round of optimization. The architecture is composed of a Convolutional Network, a novel dense localization layer, and a Recurrent Neural Network language model that generates the label sequences. We evaluate our network on the Visual Genome dataset, which comprises 94,000 images and 4,100,000 region-grounded captions. We observe both speed and accuracy improvements over baselines based on current state-of-the-art approaches in both generation and retrieval settings.
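The pipeline named in the abstract (a convolutional backbone, a dense localization layer that extracts per-region features, and an RNN language model that decodes one caption per region) can be sketched roughly as below. This is a minimal illustrative sketch, not the authors' implementation (which used VGG-16 and a bilinear-sampling localization layer in Torch/Lua); the module sizes, the TinyFCLN name, and the use of torchvision.ops.roi_align as a stand-in for region-feature pooling are assumptions made for illustration.

# Hypothetical sketch of a DenseCap-style pipeline; not the authors' code.
import torch
import torch.nn as nn
from torchvision.ops import roi_align


class TinyFCLN(nn.Module):
    def __init__(self, vocab_size=1000, feat_dim=64, hidden_dim=128):
        super().__init__()
        # Convolutional backbone (stand-in for the VGG-16 used in the paper).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Localization stage: pool a fixed-size feature grid for each region.
        # The real dense localization layer also *predicts* the regions and
        # samples features with bilinear interpolation, end-to-end.
        self.pool_size = 7
        self.region_fc = nn.Linear(feat_dim * self.pool_size ** 2, hidden_dim)
        # RNN language model that emits a word sequence per region.
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.rnn = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.word_out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, boxes, captions):
        # images: (B, 3, H, W); boxes: list of (R_i, 4) boxes in image coords;
        # captions: (total regions R, T) word indices.
        feats = self.backbone(images)                       # (B, C, H/4, W/4)
        regions = roi_align(feats, boxes,
                            output_size=self.pool_size,
                            spatial_scale=0.25)             # (R, C, 7, 7)
        region_code = self.region_fc(regions.flatten(1))    # (R, hidden)
        # Condition the LSTM on the region code via its initial hidden state.
        h0 = region_code.unsqueeze(0)
        c0 = torch.zeros_like(h0)
        words = self.embed(captions)                        # (R, T, hidden)
        out, _ = self.rnn(words, (h0, c0))
        return self.word_out(out)                           # (R, T, vocab)


if __name__ == "__main__":
    model = TinyFCLN()
    imgs = torch.randn(1, 3, 64, 64)
    boxes = [torch.tensor([[0.0, 0.0, 32.0, 32.0]])]        # one candidate region
    caps = torch.randint(0, 1000, (1, 5))
    print(model(imgs, boxes, caps).shape)                   # (1, 5, 1000)

In the paper, the localization layer proposes the boxes itself and samples their features differentiably, which is what lets the whole network be trained with a single round of optimization; the sketch above only illustrates the "one region code conditions one label sequence" structure.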
Pages: 4565-4574
Page count: 10