Human Action Recognition using Factorized Spatio-Temporal Convolutional Networks

Cited by: 378
Authors
Sun, Lin [1 ,4 ]
Jia, Kui [3 ]
Yeung, Dit-Yan [1 ,2 ]
Shi, Bertram E. [1 ]
Affiliations
[1] Hong Kong Univ Sci & Technol, Dept Elect & Comp Engn, Hong Kong, Peoples R China
[2] Hong Kong Univ Sci & Technol, Dept Comp Sci & Engn, Hong Kong, Peoples R China
[3] Univ Macau, Fac Sci & Technol, Taipa, Macau, Peoples R China
[4] Lenovo Corp Res, Hong Kong Branch, Hong Kong, Peoples R China
Source
2015 IEEE International Conference on Computer Vision (ICCV) | 2015
DOI
10.1109/ICCV.2015.522
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Human actions in video sequences are three-dimensional (3D) spatio-temporal signals characterizing both the visual appearance and motion dynamics of the humans and objects involved. Inspired by the success of convolutional neural networks (CNNs) for image classification, recent attempts have been made to learn 3D CNNs for recognizing human actions in videos. However, partly due to the high complexity of training 3D convolution kernels and the need for large quantities of training videos, only limited success has been reported. This motivated us to investigate, in this paper, a new deep architecture that can handle 3D signals more effectively. Specifically, we propose factorized spatio-temporal convolutional networks (FSTCN) that factorize the original 3D convolution kernel learning into a sequential process: learning 2D spatial kernels in the lower layers (called spatial convolutional layers), followed by learning 1D temporal kernels in the upper layers (called temporal convolutional layers). We introduce a novel transformation and permutation operator to make this factorization possible. Moreover, to address the issue of sequence alignment, we propose an effective training and inference strategy based on sampling multiple video clips from a given action video sequence. We have tested FSTCN on two commonly used benchmark datasets (UCF-101 and HMDB-51). Without using auxiliary training videos to boost performance, FSTCN outperforms existing CNN-based methods and achieves performance comparable to a recent method that benefits from auxiliary training videos.
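
To make the factorization concrete, the following is a minimal PyTorch sketch of the idea the abstract describes: 2D spatial kernels applied per frame in the lower layers, then a transformation/permutation step that exposes the time axis, then 1D temporal kernels in the upper layers. This is an illustration under stated assumptions, not the authors' implementation; the class name FactorizedSpatioTemporalBlock, the channel sizes, and the use of Conv2d/Conv1d primitives are assumptions introduced here, and only the factorization pattern itself follows the paper.

    import torch
    import torch.nn as nn

    class FactorizedSpatioTemporalBlock(nn.Module):
        # Illustrative sketch of FSTCN-style factorization (hypothetical,
        # not the authors' code). Tensor layout throughout:
        # (batch, channels, time, height, width).
        def __init__(self, in_channels, spatial_channels, temporal_channels):
            super().__init__()
            # 2D spatial kernels (lower layers), applied to each frame.
            self.spatial_conv = nn.Conv2d(in_channels, spatial_channels,
                                          kernel_size=3, padding=1)
            # 1D temporal kernels (upper layers), applied along the time axis.
            self.temporal_conv = nn.Conv1d(spatial_channels, temporal_channels,
                                           kernel_size=3, padding=1)

        def forward(self, x):
            b, c, t, h, w = x.shape
            # Fold time into the batch axis so the 2D kernels see single frames.
            x = x.permute(0, 2, 1, 3, 4).reshape(b * t, c, h, w)
            x = torch.relu(self.spatial_conv(x))
            cs = x.shape[1]
            # Transformation/permutation step: regroup the tensor so the 1D
            # kernels convolve over time at every spatial location.
            x = x.reshape(b, t, cs, h, w).permute(0, 3, 4, 2, 1)
            x = x.reshape(b * h * w, cs, t)
            x = torch.relu(self.temporal_conv(x))
            ct = x.shape[1]
            # Restore the (batch, channels, time, height, width) layout.
            x = x.reshape(b, h, w, ct, t).permute(0, 3, 4, 1, 2)
            return x

    # Example: two 8-frame clips of 32x32 RGB frames.
    block = FactorizedSpatioTemporalBlock(3, 16, 32)
    out = block(torch.randn(2, 3, 8, 32, 32))  # shape (2, 32, 8, 32, 32)

Because the spatial and temporal kernels are learned separately rather than as one joint 3D kernel, a block like this has far fewer parameters to train for the same spatio-temporal receptive field, which reflects the training-complexity argument the abstract makes for factorization.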
Pages: 4597-4605
Page count: 9