Self-critical Sequence Training for Image Captioning

Cited by: 1311
Authors
Rennie, Steven J. [1 ]
Marcheret, Etienne [1 ]
Mroueh, Youssef [1 ]
Ross, Jerret [1 ]
Goel, Vaibhava [1 ]
Affiliation
[1] IBM Corp., T. J. Watson Research Center, Watson Multimodal Algorithms & Engines Group, New York, NY 10022 USA
Source
30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017) | 2017
DOI
10.1109/CVPR.2017.131
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Recently it has been shown that policy-gradient methods for reinforcement learning can be utilized to train deep end-to-end systems directly on non-differentiable metrics for the task at hand. In this paper we consider the problem of optimizing image captioning systems using reinforcement learning, and show that by carefully optimizing our systems using the test metrics of the MSCOCO task, significant gains in performance can be realized. Our systems are built using a new optimization approach that we call self-critical sequence training (SCST). SCST is a form of the popular REINFORCE algorithm that, rather than estimating a "baseline" to normalize the rewards and reduce variance, utilizes the output of its own test-time inference algorithm to normalize the rewards it experiences. This approach avoids both estimating the reward signal (as actor-critic methods must do) and estimating normalization (as REINFORCE algorithms typically do), while at the same time harmonizing the model with its test-time inference procedure. Empirically we find that directly optimizing the CIDEr metric with SCST and greedy decoding at test-time is highly effective. Our results on the MSCOCO evaluation server establish a new state-of-the-art on the task, improving the best result in terms of CIDEr from 104.9 to 114.7.
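The core SCST update described in the abstract can be stated compactly. The following is a minimal, hypothetical PyTorch sketch (not the authors' released implementation): the metric score of a caption sampled from the model is baselined by the score of the caption the model produces under its own greedy test-time decoding, and the resulting advantage weights the log-probability of the sampled tokens. All names here (scst_loss, sample_log_probs, and so on) are illustrative assumptions, and the rewards would come from an external metric such as CIDEr.

import torch

def scst_loss(sample_log_probs, sample_rewards, greedy_rewards):
    # sample_log_probs: (batch, T) log-probabilities of the tokens in captions
    #                   sampled from the model's own distribution
    # sample_rewards:   (batch,) metric score (e.g. CIDEr) of each sampled caption
    # greedy_rewards:   (batch,) metric score of the greedy-decoded caption,
    #                   which serves as the self-critical baseline
    # Advantage: how much the sample beats the model's own test-time output.
    # No learned critic or separate baseline estimator is required.
    advantage = (sample_rewards - greedy_rewards).detach().unsqueeze(1)
    # REINFORCE with the self-critical baseline: minimizing this loss raises
    # the probability of samples that outscore the greedy caption and lowers
    # the probability of samples that fall short.
    return -(advantage * sample_log_probs).sum(dim=1).mean()

# Toy usage with random stand-ins for model outputs and metric scores.
torch.manual_seed(0)
logits = torch.randn(4, 12, requires_grad=True)            # stand-in model outputs
sample_log_probs = torch.nn.functional.logsigmoid(logits)  # stand-in token log-probs
sample_rewards = torch.rand(4)                              # e.g. CIDEr of sampled captions
greedy_rewards = torch.rand(4)                              # e.g. CIDEr of greedy captions
loss = scst_loss(sample_log_probs, sample_rewards, greedy_rewards)
loss.backward()

A full implementation would gather the log-probabilities of the actually sampled token ids and mask positions beyond each caption's length; both details are omitted above for brevity.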
Pages: 1179-1195
Page count: 17