Regularizing RNNs for Caption Generation by Reconstructing The Past with The Present

Cited by: 46
Authors
Chen, Xinpeng [1 ]
Ma, Lin [2 ]
Jiang, Wenhao [2 ]
Yao, Jian [1 ]
Liu, Wei [2 ]
Affiliations
[1] Wuhan Univ, Wuhan, Hubei, Peoples R China
[2] Tencent AI Lab, Bellevue, WA 98004 USA
Source
2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2018
Funding
National Natural Science Foundation of China
DOI
10.1109/CVPR.2018.00834
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Recently, caption generation with an encoder-decoder framework has been extensively studied and applied in different domains, such as image captioning, code captioning, and so on. In this paper, we propose a novel architecture, namely Auto-Reconstructor Network (ARNet), which, coupled with the conventional encoder-decoder framework, works in an end-to-end fashion to generate captions. ARNet aims at reconstructing the previous hidden state from the present one, besides behaving as an input-dependent transition operator. Therefore, ARNet encourages the current hidden state to embed more information from the previous one, which helps regularize the transition dynamics of recurrent neural networks (RNNs). Extensive experimental results show that our proposed ARNet boosts the performance over existing encoder-decoder models on both image captioning and source code captioning tasks. Additionally, ARNet remarkably reduces the discrepancy between the training and inference processes for caption generation. Furthermore, the performance on permuted sequential MNIST demonstrates that ARNet can effectively regularize RNNs, especially for modeling long-term dependencies. Our code is available at: https://github.com/chenxinpeng/ARNet.
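The reconstruction idea described in the abstract can be sketched as follows. This is a minimal pure-Python illustration, not the authors' implementation: the paper's reconstructor is an LSTM coupled to the decoder, whereas here a hypothetical single linear-plus-tanh layer `Wr` stands in, all weights are toy values, and `lam` is an assumed regularization weight. The ARNet term reconstructs each previous hidden state h_{t-1} from the current one h_t and adds the squared reconstruction error to the training loss.

```python
import math

def tanh_vec(v):
    return [math.tanh(x) for x in v]

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def rnn_step(Wh, Wx, h_prev, x):
    # Plain Elman-style transition: h_t = tanh(Wh @ h_{t-1} + Wx @ x_t)
    return tanh_vec([a + b for a, b in zip(matvec(Wh, h_prev), matvec(Wx, x))])

def arnet_reconstruction_loss(states, Wr, lam=0.01):
    # ARNet-style term: reconstruct h_{t-1} from h_t and penalize the squared
    # error, encouraging h_t to retain information from the previous state.
    # (Simplified reconstructor: one hypothetical linear + tanh layer Wr.)
    loss = 0.0
    for h_prev, h_cur in zip(states, states[1:]):
        h_rec = tanh_vec(matvec(Wr, h_cur))
        loss += sum((a - b) ** 2 for a, b in zip(h_prev, h_rec))
    return lam * loss

# Toy rollout over three input steps with 2-dimensional states.
Wh = [[0.1, -0.2], [0.3, 0.1]]
Wx = [[0.5, 0.0], [0.0, 0.5]]
Wr = [[1.0, 0.0], [0.0, 1.0]]
h = [0.0, 0.0]
states = [h]
for x in [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]:
    h = rnn_step(Wh, Wx, h, x)
    states.append(h)

reg = arnet_reconstruction_loss(states, Wr)
# In training, this term would be added to the captioning loss:
# total_loss = caption_loss + reg
```

The regularizer never changes the decoding path at inference time; it only shapes the hidden-state dynamics during training, which is how it narrows the train/inference discrepancy the abstract mentions.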
Pages: 7995-8003 (9 pages)