An Empirical Study of Language CNN for Image Captioning

Cited by: 62
Authors
Gu, Jiuxiang [1 ]
Wang, Gang [2 ]
Cai, Jianfei [3 ]
Chen, Tsuhan [3 ]
Affiliations
[1] Nanyang Technol Univ, Interdisciplinary Grad Sch, ROSE Lab, Singapore, Singapore
[2] Alibaba AI Labs, Hangzhou, Zhejiang, Peoples R China
[3] Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore, Singapore
Source
2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV) | 2017
Funding
National Research Foundation of Singapore;
DOI
10.1109/ICCV.2017.138
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Language models based on recurrent neural networks have dominated recent image caption generation tasks. In this paper, we introduce a language CNN model which is suited to statistical language modeling and shows competitive performance in image captioning. In contrast to previous models, which predict the next word based on one previous word and a hidden state, our language CNN is fed with all the previous words and can model the long-range dependencies in the word history, which are critical for image captioning. The effectiveness of our approach is validated on two datasets: Flickr30K and MS COCO. Our extensive experimental results show that our method outperforms vanilla recurrent neural network based language models and is competitive with state-of-the-art methods.
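As a rough illustration of the idea described in the abstract (not the authors' actual architecture), a language CNN scores the next word by convolving over the embeddings of all previous words, rather than conditioning on a single recurrent hidden state. A minimal numpy sketch with hypothetical dimensions, a single causal convolution layer, and max-pooling over time:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: vocabulary, embedding width, conv window, filter count.
vocab_size, embed_dim, kernel_size, n_filters = 100, 16, 3, 32

E = rng.normal(size=(vocab_size, embed_dim))           # word embedding table
W = rng.normal(size=(kernel_size, embed_dim, n_filters))  # conv filters
V = rng.normal(size=(n_filters, vocab_size))           # output projection

def next_word_logits(history):
    """Score the next word from ALL previous words (unlike an RNN step,
    which sees only the latest word plus a fixed-size hidden state)."""
    x = E[history]                                     # (t, embed_dim)
    # Left-pad so the convolution is causal and covers short histories.
    pad = np.zeros((kernel_size - 1, embed_dim))
    x = np.vstack([pad, x])
    # ReLU conv features at every time step of the history.
    feats = [
        np.maximum(0.0, np.einsum('ke,kef->f', x[i:i + kernel_size], W))
        for i in range(len(history))
    ]
    # Max-pool over time: the prediction aggregates the whole history.
    h = np.max(np.stack(feats), axis=0)                # (n_filters,)
    return h @ V                                       # (vocab_size,) scores

logits = next_word_logits([5, 17, 42])
print(logits.shape)  # (100,)
```

The paper's model stacks such convolutional layers to widen the receptive field over the history; this sketch collapses that into one layer plus pooling purely to contrast it with a one-word-at-a-time RNN update.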
Pages: 1231-1240 (10 pages)