Hierarchy Parsing for Image Captioning

Cited by: 151
Authors
Yao, Ting [1 ]
Pan, Yingwei [1 ]
Li, Yehao [1 ]
Mei, Tao [1 ]
Affiliation
[1] JD AI Res, Beijing, Peoples R China
Source
2019 IEEE/CVF International Conference on Computer Vision (ICCV 2019), 2019
DOI
10.1109/ICCV.2019.00271
CLC (Chinese Library Classification)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405
Abstract
It is widely believed that parsing an image into its constituent visual patterns helps in understanding and representing the image. Nevertheless, there has been little evidence supporting this idea for the task of describing an image with a natural-language utterance. In this paper, we introduce a new design that models a hierarchy from the instance level (segmentation) and region level (detection) up to the whole image, enabling a thorough image understanding for captioning. Specifically, we present a HIerarchy Parsing (HIP) architecture that integrates hierarchical structure into the image encoder. Technically, an image is decomposed into a set of regions, and some regions are further resolved into finer ones. Each region is then regressed to an instance, i.e., the foreground of the region. This process naturally builds a hierarchical tree. A tree-structured Long Short-Term Memory (Tree-LSTM) network is then employed to interpret the hierarchical structure and enhance all instance-level, region-level, and image-level features. HIP is appealing in that it is pluggable into any neural captioning model. Extensive experiments on the COCO image captioning dataset demonstrate the superiority of HIP. More remarkably, HIP plus a top-down attention-based LSTM decoder increases CIDEr-D performance from 120.1% to 127.2% on the COCO Karpathy test split. When the instance-level and region-level features from HIP are further endowed with semantic relations learnt through Graph Convolutional Networks (GCN), CIDEr-D is boosted to 130.6%.
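The bottom-up encoding the abstract describes (leaf instance/region features flowing up a tree into an image-level state) can be sketched with a generic child-sum Tree-LSTM cell in the style of Tai et al. (2015). This is a minimal illustrative sketch, not the authors' implementation: all parameter names, dimensions, and the toy three-node tree are assumptions for demonstration only.

```python
import numpy as np

def child_sum_tree_lstm_cell(x, child_hs, child_cs, W, U, b, d):
    """One child-sum Tree-LSTM update for a single tree node.

    x         : (d_in,) input feature of this node (e.g. a region feature).
    child_hs  : list of (d,) hidden states of the node's children.
    child_cs  : list of (d,) cell states of the node's children.
    W, U, b   : parameters of shapes (4*d, d_in), (4*d, d), (4*d,).
    Returns (h, c), the node's hidden and cell state.
    """
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Sum children's hidden states (zero vector at the leaves).
    h_sum = np.sum(child_hs, axis=0) if child_hs else np.zeros(d)
    a = W @ x + b
    i = sigmoid(a[:d] + U[:d] @ h_sum)            # input gate
    o = sigmoid(a[d:2*d] + U[d:2*d] @ h_sum)      # output gate
    u = np.tanh(a[2*d:3*d] + U[2*d:3*d] @ h_sum)  # candidate update
    c = i * u
    # One forget gate per child, conditioned on that child's own hidden state.
    for h_k, c_k in zip(child_hs, child_cs):
        f_k = sigmoid(a[3*d:4*d] + U[3*d:4*d] @ h_k)
        c = c + f_k * c_k
    h = o * np.tanh(c)
    return h, c

# Toy tree: an image-level root aggregating two region-level leaves.
rng = np.random.default_rng(0)
d_in, d = 8, 4
W = rng.standard_normal((4*d, d_in)) * 0.1
U = rng.standard_normal((4*d, d)) * 0.1
b = np.zeros(4*d)

h1, c1 = child_sum_tree_lstm_cell(rng.standard_normal(d_in), [], [], W, U, b, d)
h2, c2 = child_sum_tree_lstm_cell(rng.standard_normal(d_in), [], [], W, U, b, d)
h_root, c_root = child_sum_tree_lstm_cell(
    rng.standard_normal(d_in), [h1, h2], [c1, c2], W, U, b, d)
```

In this framing, `h_root` plays the role of the enhanced image-level feature, while the leaf states correspond to enhanced region/instance features; the paper feeds such enhanced features to an attention-based LSTM decoder.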
Pages: 2621-2629
Page count: 9