Learning Spatio-Temporal Features with 3D Residual Networks for Action Recognition

Cited by: 454
Authors
Hara, Kensho [1 ]
Kataoka, Hirokatsu [1 ]
Satoh, Yutaka [1 ]
Affiliation
[1] Natl Inst Adv Ind Sci & Technol, Tsukuba, Ibaraki, Japan
Source
2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW 2017) | 2017
DOI
10.1109/ICCVW.2017.373
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Convolutional neural networks with spatio-temporal 3D kernels (3D CNNs) can directly extract spatio-temporal features from videos for action recognition. Although 3D kernels tend to overfit because of their large number of parameters, 3D CNNs have been greatly improved by training on recent huge video databases. However, the architectures of 3D CNNs remain relatively shallow compared with the very deep 2D CNNs, such as residual networks (ResNets). In this paper, we propose 3D CNNs based on ResNets toward a better action representation. We describe the training procedure of our 3D ResNets in detail. We experimentally evaluate the 3D ResNets on the ActivityNet and Kinetics datasets. The 3D ResNets trained on Kinetics did not suffer from overfitting despite the large number of model parameters, and achieved better performance than relatively shallow networks such as C3D. Our code and pretrained models (e.g., for Kinetics and ActivityNet) are publicly available at https://github.com/kenshohara/3D-ResNets.
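The abstract's core idea, a residual block whose convolution uses a spatio-temporal 3D kernel, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (their PyTorch code lives at the linked repository); the function names, single-channel setup, and shapes here are illustrative assumptions only.

```python
import numpy as np

def conv3d(volume, kernel):
    """Valid 3D cross-correlation of a single-channel video volume
    (T, H, W) with a spatio-temporal kernel (kt, kh, kw)."""
    kt, kh, kw = kernel.shape
    T, H, W = volume.shape
    out = np.zeros((T - kt + 1, H - kh + 1, W - kw + 1))
    for t in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                out[t, y, x] = np.sum(volume[t:t+kt, y:y+kh, x:x+kw] * kernel)
    return out

def residual_block(volume, kernel):
    """Identity-shortcut residual block, out = ReLU(conv3d(x)) + x.
    'Same' padding keeps the output shape equal to the input shape."""
    kt, kh, kw = kernel.shape
    padded = np.pad(volume, ((kt // 2,) * 2, (kh // 2,) * 2, (kw // 2,) * 2))
    f = np.maximum(conv3d(padded, kernel), 0.0)  # conv -> ReLU
    return f + volume  # skip connection

video = np.random.rand(8, 16, 16)        # 8 frames of 16x16 pixels
kernel = np.random.rand(3, 3, 3) * 0.1   # 3x3x3 spatio-temporal kernel
out = residual_block(video, kernel)
print(out.shape)  # (8, 16, 16)
```

The 3x3x3 kernel slides jointly over time and space, which is what distinguishes 3D CNNs from frame-wise 2D CNNs; the identity shortcut is what lets such networks grow deep without degradation, the property the paper exploits.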
Pages: 3154-3160 (7 pages)