Constrained LSTM and Residual Attention for Image Captioning

Cited: 29
Authors
Yang, Liang [1 ]
Hu, Haifeng [1 ]
Xing, Songlong [1 ]
Lu, Xinlong [1 ]
Affiliations
[1] Sun Yat Sen Univ, Guangzhou 510006, Guangdong, Peoples R China
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China
Keywords
Image captioning; visual attention; visual skeleton; object detection; LSTM;
DOI
10.1145/3386725
Chinese Library Classification (CLC) number
TP [Automation technology; computer technology]
Discipline classification code
0812
Abstract
Visual structure and syntactic structure are essential in images and texts, respectively. Visual structure depicts both the entities in an image and their interactions, whereas syntactic structure in texts reflects part-of-speech constraints between adjacent words. Most existing methods either use a global visual representation to guide the language model or generate captions without considering the relationships between different entities or adjacent words. Thus, their language models lack grounding in both visual and syntactic structure. To solve this problem, we propose a model that aligns the language model to a certain visual structure and also constrains it with a specific part-of-speech template. In addition, most methods exploit the latent relationship between words in a sentence and pre-extracted visual regions in an image, yet ignore the effects of unextracted regions on predicted words. We develop a residual attention mechanism to simultaneously focus on the pre-extracted visual objects and the unextracted regions in an image. Residual attention is capable of capturing the precise regions of an image corresponding to the predicted words, considering the effects of both visual objects and unextracted regions. The effectiveness of our entire framework and of each proposed module is verified on two classical datasets: MSCOCO and Flickr30k. Our framework is on par with or better than state-of-the-art methods and achieves superior performance on the COCO captioning leaderboard.
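The residual-attention idea described above could be sketched roughly as follows: score the pre-extracted object regions against the decoder state, take their attention-weighted sum, and add a residual branch that injects a global feature standing in for the unextracted regions. This is a minimal illustrative sketch, not the paper's implementation; all function and weight names (`residual_attention`, `W_obj`, `W_hid`, `w_score`, `W_res`) are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def residual_attention(obj_feats, global_feat, hidden, W_obj, W_hid, w_score, W_res):
    """Hypothetical sketch of residual attention for captioning.
    obj_feats:   (N, d) pre-extracted object-region features
    global_feat: (d,)   global image feature standing in for unextracted regions
    hidden:      (h,)   current decoder (LSTM) hidden state
    Returns the attended context vector and the attention weights."""
    # Additive attention scores between decoder state and each object region.
    scores = np.tanh(obj_feats @ W_obj + hidden @ W_hid) @ w_score   # (N,)
    alpha = softmax(scores)                                          # attention weights
    attended = alpha @ obj_feats                                     # weighted sum over regions
    # Residual branch: the global feature contributes unextracted-region context.
    context = attended + global_feat @ W_res
    return context, alpha
```

The residual branch means the predicted word is conditioned on the whole image even when no detected object explains it, which is the gap in object-only attention that the abstract points out.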
Pages: 18