Temporal Pyramid Pooling-Based Convolutional Neural Network for Action Recognition

Cited by: 97
Authors
Wang, Peng [1 ]
Cao, Yuanzhouhan [2 ]
Shen, Chunhua [2 ,3 ]
Liu, Lingqiao [2 ]
Shen, Heng Tao [1 ]
Affiliations
[1] Univ Queensland, Sch Informat Technol & Elect Engn, St Lucia, Qld 4072, Australia
[2] Univ Adelaide, Sch Comp Sci, Adelaide, SA 5005, Australia
[3] Australian Ctr Robot Vis, Brisbane, Qld 4000, Australia
Funding
Australian Research Council
Keywords
Action recognition; convolutional neural network (CNN); temporal pyramid pooling
DOI
10.1109/TCSVT.2016.2576761
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline Classification Codes
0808; 0809
Abstract
Encouraged by the success of convolutional neural networks (CNNs) in image classification, much recent effort has been devoted to applying CNNs to video-based action recognition. One challenge is that a video contains a varying number of frames, which is incompatible with the standard fixed-size input of a CNN. Existing methods handle this issue either by directly sampling a fixed number of frames or by bypassing it with a 3D convolutional layer that performs convolution in the spatio-temporal domain. In this paper, we propose a novel network structure that accepts an arbitrary number of frames as input. The key to our solution is a module consisting of an encoding layer and a temporal pyramid pooling layer. The encoding layer maps the activations from the previous layers to a feature vector suitable for pooling, whereas the temporal pyramid pooling layer converts multiple frame-level activations into a fixed-length video-level representation. In addition, we adopt a feature concatenation layer that combines appearance and motion information. Compared with the frame-sampling strategy, our method avoids the risk of missing important frames. Compared with the 3D convolutional method, which requires a huge video data set for network training, our model can be learned on a small target data set because we can leverage an off-the-shelf image-level CNN for model parameter initialization. Experiments on three challenging data sets, Hollywood2, HMDB51, and UCF101, demonstrate the effectiveness of the proposed network.
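The abstract describes the temporal pyramid pooling layer only at a high level. Below is a minimal NumPy sketch of how such a layer can turn a variable number of frame-level features into a fixed-length video-level vector; the pyramid levels (1, 2, 4) and the use of max pooling are illustrative assumptions, not the configuration reported in the paper.

    import numpy as np

    def temporal_pyramid_pool(frame_feats, levels=(1, 2, 4), pool=np.max):
        """Pool a variable-length sequence of frame-level features into a
        fixed-length video-level vector.

        frame_feats : (T, D) array, one D-dim feature per frame (T varies).
        levels      : segments per pyramid level (illustrative choice here);
                      the output has D * sum(levels) dimensions.
        pool        : pooling op applied within each temporal segment
                      (max pooling assumed here).
        """
        T, D = frame_feats.shape
        pooled = []
        for k in levels:
            # Split the T frames into k roughly equal temporal segments.
            bounds = np.linspace(0, T, k + 1).astype(int)
            for s in range(k):
                # Guard against empty segments when T < k.
                lo, hi = bounds[s], max(bounds[s + 1], bounds[s] + 1)
                pooled.append(pool(frame_feats[lo:hi], axis=0))
        return np.concatenate(pooled)  # shape: (D * sum(levels),)

    # Two videos with different frame counts map to the same output size.
    v1 = temporal_pyramid_pool(np.random.rand(37, 128))
    v2 = temporal_pyramid_pool(np.random.rand(112, 128))
    assert v1.shape == v2.shape == (128 * 7,)

Because the output dimensionality depends only on the feature size and the pyramid configuration, not on the frame count, a fixed-size classifier can sit on top of videos of any length, which is the property the abstract emphasizes.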
Pages: 2613-2622
Page count: 10