CAM-RNN: Co-Attention Model Based RNN for Video Captioning

Cited by: 105
Authors
Zhao, Bin [1 ,2 ]
Li, Xuelong [1 ,2 ]
Lu, Xiaoqiang [3 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Comp Sci, Xian 710072, Shaanxi, Peoples R China
[2] Northwestern Polytech Univ, Ctr OPT IMagery Anal & Learning OPTIMAL, Xian 710072, Shaanxi, Peoples R China
[3] Chinese Acad Sci, Key Lab Spectral Imaging Technol CAS, Xian Inst Opt & Precis Mech, Xian 710119, Shaanxi, Peoples R China
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
Attention model; video captioning; recurrent neural network;
DOI
10.1109/TIP.2019.2916757
CLC number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
Video captioning is a technique that bridges vision and language, for which both visual and textual information are important. Typical approaches are based on the recurrent neural network (RNN), where the caption is generated word by word and the current word is predicted from the visual content and the previously generated words. However, when predicting the current word, much of the visual content is uncorrelated with it, and some of the previously generated words provide little information, which may interfere with generating a correct caption. Motivated by this observation, we attempt to exploit the visual and text features that are most correlated with the caption. In this paper, a co-attention model based recurrent neural network (CAM-RNN) is proposed, where the CAM encodes the visual and text features and the RNN works as the decoder to generate the video caption. Specifically, the CAM is composed of a visual attention module, a text attention module, and a balancing gate. During the generation procedure, the visual attention module adaptively attends to the salient regions in each frame and to the frames most correlated with the caption, while the text attention module automatically focuses on the most relevant previously generated words or phrases. Between the two attention modules, a balancing gate regulates the influence of the visual and text features when generating the caption. Extensive experiments on four popular datasets, MSVD, Charades, MSR-VTT, and MPII-MD, demonstrate the effectiveness of the proposed approach.
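To make the decoding step described above concrete, below is a minimal PyTorch sketch of a single co-attention decoding step: a visual attention module over frame features, a text attention module over previously generated word embeddings, and a balancing gate that weights the two contexts before an RNN update. The module names, dimensions, and gating equation are illustrative assumptions for exposition, not the authors' exact formulation or released code.

# Hypothetical sketch of one CAM-RNN-style decoding step (assumed formulation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoAttentionStep(nn.Module):
    def __init__(self, feat_dim, embed_dim, hidden_dim, vocab_size):
        super().__init__()
        # Visual attention: scores each frame feature against the decoder state.
        self.vis_att = nn.Linear(feat_dim + hidden_dim, 1)
        # Text attention: scores each previously generated word embedding.
        self.txt_att = nn.Linear(embed_dim + hidden_dim, 1)
        # Balancing gate: scalar in (0, 1) weighting visual vs. text context.
        self.gate = nn.Linear(feat_dim + embed_dim + hidden_dim, 1)
        # Decoder RNN cell (a GRU cell here, purely for illustration).
        self.rnn = nn.GRUCell(feat_dim + embed_dim, hidden_dim)
        self.classifier = nn.Linear(hidden_dim, vocab_size)

    def forward(self, frame_feats, word_embeds, h):
        # frame_feats: (T, feat_dim); word_embeds: (N, embed_dim), where the
        # first row can be a start-token embedding; h: (1, hidden_dim).
        T, N = frame_feats.size(0), word_embeds.size(0)

        # Visual attention: weight frames by relevance to the decoder state.
        vis_scores = self.vis_att(torch.cat([frame_feats, h.expand(T, -1)], dim=1))
        vis_ctx = (F.softmax(vis_scores, dim=0) * frame_feats).sum(dim=0, keepdim=True)

        # Text attention: weight previously generated words by relevance.
        txt_scores = self.txt_att(torch.cat([word_embeds, h.expand(N, -1)], dim=1))
        txt_ctx = (F.softmax(txt_scores, dim=0) * word_embeds).sum(dim=0, keepdim=True)

        # Balancing gate regulates the influence of the two contexts.
        beta = torch.sigmoid(self.gate(torch.cat([vis_ctx, txt_ctx, h], dim=1)))
        fused = torch.cat([beta * vis_ctx, (1 - beta) * txt_ctx], dim=1)

        # One decoding step: update the hidden state and predict the next word.
        h = self.rnn(fused, h)
        logits = self.classifier(h)
        return logits, h

At generation time, this step would be applied repeatedly, feeding the embedding of each newly predicted word back into word_embeds until an end token is produced.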
Pages: 5552-5565
Page count: 14