Meta Learning for Image Captioning

Cited by: 0
Authors
Li, Nannan [1 ]
Chen, Zhenzhong [1 ]
Liu, Shan [2 ]
Affiliations
[1] Wuhan Univ, Sch Remote Sensing & Informat Engn, Wuhan, Hubei, Peoples R China
[2] Tencent Media Lab, Palo Alto, CA USA
Source
THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE | 2019
Funding
National Natural Science Foundation of China;
Keywords
DOI
None available
CLC (Chinese Library Classification) number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Reinforcement learning (RL) has shown its advantages in image captioning by directly optimizing the non-differentiable evaluation metric as the reward. However, due to the reward hacking problem in RL, maximizing the reward does not necessarily improve caption quality, especially with respect to propositional content and distinctiveness. In this work, we propose to use a new learning method, meta learning, to utilize supervision from the ground truth while optimizing the reward function in RL. To improve the propositional content and the distinctiveness of the generated captions, the proposed model seeks a globally optimal solution by simultaneously taking gradient steps towards the supervision task and the reinforcement task. Experimental results on MS COCO validate the effectiveness of our approach when compared with state-of-the-art methods.
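The core idea of the abstract — letting ground-truth supervision steer the reward optimization by combining a reinforcement-task gradient with a supervision gradient evaluated at reinforcement-adapted parameters — can be sketched in a MAML-style toy example. This is a minimal illustration under stated assumptions, not the paper's actual update rule: the quadratic stand-in losses, the step sizes `alpha` and `beta`, and the specific gradient combination in `meta_step` are all hypothetical.

```python
import numpy as np

# Toy stand-ins for the two objectives (assumptions, not the paper's
# actual losses): the supervision loss pulls the parameters toward
# t_sup, while the reinforcement objective pulls them toward t_rl.
t_sup = np.array([1.0, 0.0])
t_rl = np.array([0.0, 1.0])

def grad_sup(theta):
    # Gradient of the supervision loss 0.5 * ||theta - t_sup||^2.
    return theta - t_sup

def grad_rl(theta):
    # Gradient of the reinforcement loss 0.5 * ||theta - t_rl||^2.
    return theta - t_rl

def meta_step(theta, alpha=0.1, beta=0.1):
    """One MAML-style meta update (hypothetical combination).

    The inner step adapts the parameters toward the reinforcement
    task; the outer update adds the supervision gradient evaluated
    at the adapted parameters, so ground-truth supervision keeps
    steering the reward optimization away from reward hacking.
    """
    theta_rl = theta - alpha * grad_rl(theta)  # inner (task) step
    return theta - beta * (grad_rl(theta) + grad_sup(theta_rl))

theta = np.zeros(2)
for _ in range(200):
    theta = meta_step(theta)
# theta settles between t_sup and t_rl: both objectives improve
# relative to the initial parameters, rather than the update
# collapsing onto the reward target alone.
```

In a real captioning model, `grad_sup` would come from the cross-entropy loss against ground-truth captions and `grad_rl` from a policy-gradient estimate of the CIDEr reward; the toy losses merely make the gradient combination easy to verify.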
Pages: 8626-8633
Page count: 8