Long-Term Recurrent Convolutional Networks for Visual Recognition and Description

Cited by: 858
Authors
Donahue, Jeff [1]
Hendricks, Lisa Anne [1]
Rohrbach, Marcus [1,2]
Venugopalan, Subhashini [3]
Guadarrama, Sergio [1]
Saenko, Kate [4]
Darrell, Trevor [1,2]
Affiliations
[1] Univ Calif Berkeley, Dept Elect Engn & Comp Sci, Berkeley, CA 94720 USA
[2] Int Comp Sci Inst, Berkeley, CA 94720 USA
[3] Univ Texas Austin, Dept Comp Sci, Austin, TX 78712 USA
[4] Univ Massachusetts Lowell, Dept Comp Sci, Lowell, MA 01852 USA
Keywords
Computer vision; convolutional nets; deep learning; transfer learning
DOI
10.1109/TPAMI.2016.2599174
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent are effective for tasks involving sequences, visual and otherwise. We describe a class of recurrent convolutional architectures which is end-to-end trainable and suitable for large-scale visual understanding tasks, and demonstrate the value of these models for activity recognition, image captioning, and video description. In contrast to previous models which assume a fixed visual representation or perform simple temporal averaging for sequential processing, recurrent convolutional models are "doubly deep" in that they learn compositional representations in space and time. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Differentiable recurrent models are appealing in that they can directly map variable-length inputs (e.g., videos) to variable-length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent sequence models are directly connected to modern visual convolutional network models and can be jointly trained to learn temporal dynamics and convolutional perceptual representations. Our results show that such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined or optimized.
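The abstract's core idea — per-frame convolutional features driving a nonlinear recurrent state update, so variable-length frame sequences map to variable-length output sequences — can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the layer sizes, the single-layer LSTM cell, and the mean-style feature stand-in for a real CNN are all hypothetical choices made for brevity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Single LSTM step: the nonlinear state update that lets the model
    learn long-term temporal dependencies (illustrative sizes only)."""
    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        # One stacked weight matrix for the input, forget, output, and
        # candidate gates, applied to [x; h].
        self.W = rng.standard_normal((4 * hidden_dim, input_dim + hidden_dim)) * 0.1
        self.b = np.zeros(4 * hidden_dim)
        self.hidden_dim = hidden_dim

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        H = self.hidden_dim
        i, f, o = sigmoid(z[:H]), sigmoid(z[H:2 * H]), sigmoid(z[2 * H:3 * H])
        g = np.tanh(z[3 * H:])
        c = f * c + i * g       # long-term memory update
        h = o * np.tanh(c)      # hidden state / per-step output
        return h, c

def lrcn_forward(frames, cnn_features, lstm):
    """Map a variable-length frame sequence to a same-length sequence of
    hidden states. `cnn_features` stands in for a convolutional encoder
    applied to each frame; in the paper's setting the CNN and LSTM are
    trained jointly end to end."""
    h = np.zeros(lstm.hidden_dim)
    c = np.zeros(lstm.hidden_dim)
    outputs = []
    for frame in frames:
        x = cnn_features(frame)      # per-frame visual representation
        h, c = lstm.step(x, h, c)    # compositional update in time
        outputs.append(h)
    return np.stack(outputs)

# Toy usage: five 8x8 "frames", flattened pixels as a placeholder feature.
frames = np.random.default_rng(1).standard_normal((5, 8, 8))
cnn_features = lambda img: img.reshape(-1)   # placeholder for a real CNN
lstm = LSTMCell(input_dim=64, hidden_dim=16)
out = lrcn_forward(frames, cnn_features, lstm)  # shape (5, 16)
```

Because the recurrence consumes one frame at a time, the same model handles sequences of any length — the property the abstract highlights for mapping videos to natural-language descriptions.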
Pages: 677-691 (15 pages)