Describing Video With Attention-Based Bidirectional LSTM

Cited by: 182
Authors
Bin, Yi [1 ,2 ]
Yang, Yang [1 ,2 ]
Shen, Fumin [1 ,2 ]
Xie, Ning [1 ,2 ]
Shen, Heng Tao [1 ,2 ]
Li, Xuelong [3 ]
Affiliations
[1] University of Electronic Science and Technology of China, Center for Future Media, Chengdu 611731, Sichuan, People's Republic of China
[2] University of Electronic Science and Technology of China, School of Computer Science and Engineering, Chengdu 611731, Sichuan, People's Republic of China
[3] Chinese Academy of Sciences, Xi'an Institute of Optics and Precision Mechanics, Center for Optical Imagery Analysis and Learning, State Key Laboratory of Transient Optics and Photonics, Xi'an 710119, Shaanxi, People's Republic of China
Funding
National Natural Science Foundation of China
Keywords
Bidirectional long short-term memory (BiLSTM); temporal attention; video captioning
DOI
10.1109/TCYB.2018.2831447
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology]
Discipline Classification Code
0812
Abstract
Video captioning has been attracting broad research attention in the multimedia community. However, most existing approaches rely heavily on static visual information or capture only local temporal knowledge (e.g., within 16 frames), and thus can hardly describe motions accurately from a global view. In this paper, we propose a novel video captioning framework that integrates bidirectional long short-term memory (BiLSTM) and a soft attention mechanism to generate better global representations for videos and to enhance the recognition of lasting motions. To generate video captions, we exploit another long short-term memory network as a decoder to fully explore global contextual information. The benefits of the proposed method are twofold: 1) the BiLSTM structure comprehensively preserves global temporal and visual information and 2) the soft attention mechanism enables the language decoder to recognize and focus on principal targets within the complex content. We verify the effectiveness of the proposed video captioning framework on two widely used benchmarks, the Microsoft Video Description corpus (MSVD) and MSR-Video to Text (MSR-VTT), and the experimental results demonstrate the superiority of the proposed approach over several state-of-the-art methods.
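The abstract outlines an encoder-decoder design: a BiLSTM encodes per-frame CNN features into a global temporal representation, a soft attention mechanism re-weights those encoder states at every decoding step, and a separate LSTM decodes the caption word by word. Below is a minimal PyTorch sketch of that design. It is not the authors' released implementation; the class name, layer sizes, vocabulary size, and the additive (Bahdanau-style) form of the attention are illustrative assumptions.

```python
# A minimal sketch (not the authors' code) of a BiLSTM encoder with soft
# temporal attention and an LSTM caption decoder. All names and dimensions
# here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiLSTMAttnCaptioner(nn.Module):
    def __init__(self, feat_dim=2048, hidden=512, vocab_size=10000, embed=300):
        super().__init__()
        # Bidirectional encoder: preserves forward and backward temporal context.
        self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True,
                               bidirectional=True)
        self.embed = nn.Embedding(vocab_size, embed)
        # Decoder consumes the word embedding plus the attended video context.
        self.decoder = nn.LSTMCell(embed + 2 * hidden, hidden)
        # Additive attention scores each encoder step against the decoder state.
        self.attn_enc = nn.Linear(2 * hidden, hidden)
        self.attn_dec = nn.Linear(hidden, hidden)
        self.attn_v = nn.Linear(hidden, 1)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, frames, captions):
        # frames: (B, T, feat_dim) per-frame CNN features; captions: (B, L) ids.
        enc, _ = self.encoder(frames)                      # (B, T, 2H)
        B, L = captions.shape
        h = frames.new_zeros(B, self.decoder.hidden_size)
        c = torch.zeros_like(h)
        enc_proj = self.attn_enc(enc)                      # precompute (B, T, H)
        logits = []
        for t in range(L):
            # Soft attention weights over all frames (global temporal view).
            scores = self.attn_v(torch.tanh(enc_proj
                                            + self.attn_dec(h).unsqueeze(1)))
            alpha = F.softmax(scores, dim=1)               # (B, T, 1)
            ctx = (alpha * enc).sum(dim=1)                 # (B, 2H)
            x = torch.cat([self.embed(captions[:, t]), ctx], dim=1)
            h, c = self.decoder(x, (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)                  # (B, L, vocab)

# Usage (illustrative): teacher-forced forward pass on random inputs.
# model = BiLSTMAttnCaptioner()
# logits = model(torch.randn(2, 30, 2048), torch.randint(0, 10000, (2, 12)))
```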
Pages: 2631-2641
Number of pages: 11