STA-CNN: Convolutional Spatial-Temporal Attention Learning for Action Recognition

Cited by: 60
Authors
Yang, Hao [1 ]
Yuan, Chunfeng [2 ]
Zhang, Li [3 ]
Sun, Yunda [1 ]
Hu, Weiming [4 ,5 ]
Maybank, Stephen J. [6 ]
Affiliations
[1] Nuctech Co Ltd, R&D Ctr Artificial Intelligent, Beijing 100084, Peoples R China
[2] Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
[3] Tsinghua Univ, Dept Engn Phys, Beijing 100084, Peoples R China
[4] Chinese Acad Sci, Natl Lab Pattern Recognit, Ctr Excellence Brain Sci & Intelligence Technol, Inst Automat, Beijing 100190, Peoples R China
[5] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100190, Peoples R China
[6] Univ London, Birkbeck Coll, Dept Comp Sci & Informat Syst, London WC1E 7HX, England
Keywords
Videos; Feature extraction; Motion segmentation; Computational modeling; Image recognition; Solid modeling; Convolutional neural networks; Temporal attention; Spatial attention; Convolutional neural network; Action recognition
DOI
10.1109/TIP.2020.2984904
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Convolutional Neural Networks have achieved excellent success for object recognition in still images. However, their improvement over traditional methods for recognizing actions in videos is less significant, because raw videos usually contain much more redundant or irrelevant information than still images. In this paper, we propose a Spatial-Temporal Attentive Convolutional Neural Network (STA-CNN) which automatically selects discriminative temporal segments and focuses on informative spatial regions. The STA-CNN model incorporates a Temporal Attention Mechanism and a Spatial Attention Mechanism into a unified convolutional network to recognize actions in videos. The novel Temporal Attention Mechanism automatically mines discriminative temporal segments from long and noisy videos. The Spatial Attention Mechanism first exploits the instantaneous motion information in optical flow features to locate motion-salient regions, and is then trained by an auxiliary classification loss with a Global Average Pooling layer to focus on discriminative non-motion regions in the video frame. The STA-CNN model achieves state-of-the-art performance on two of the most challenging datasets, UCF-101 (95.8%) and HMDB-51 (71.5%).
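The two attention mechanisms described in the abstract can be sketched in a minimal, framework-agnostic form. The snippet below (NumPy, illustrative only) shows softmax attention over temporal segments and over spatial positions of a convolutional feature map; the function names `temporal_attention` and `spatial_attention` and the fixed weight vectors are hypothetical stand-ins — in STA-CNN these weights are learned end-to-end inside the network, and the spatial branch is additionally guided by optical flow and an auxiliary classification loss, which this sketch omits.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def temporal_attention(segment_feats, w_t):
    # segment_feats: (T, D) -- one feature vector per temporal segment.
    scores = segment_feats @ w_t             # (T,) relevance score per segment
    alpha = softmax(scores)                  # attention weights over segments
    return alpha @ segment_feats, alpha      # weighted video feature (D,)

def spatial_attention(feat_map, w_s):
    # feat_map: (H, W, C) -- convolutional feature map of one frame.
    H, W, C = feat_map.shape
    flat = feat_map.reshape(-1, C)           # (H*W, C)
    scores = flat @ w_s                      # (H*W,) relevance per position
    beta = softmax(scores)                   # attention weights over positions
    return beta @ flat, beta.reshape(H, W)   # attended frame feature (C,)

rng = np.random.default_rng(0)
T, D, H, W, C = 8, 16, 7, 7, 16
video_feat, alpha = temporal_attention(rng.normal(size=(T, D)),
                                       rng.normal(size=D))
frame_feat, beta = spatial_attention(rng.normal(size=(H, W, C)),
                                     rng.normal(size=C))
```

Both mechanisms reduce to the same pattern: score each unit (segment or spatial position), normalize the scores with a softmax, and pool features weighted by the result, so redundant segments and background regions contribute little to the final representation.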
Pages: 5783-5793 (11 pages)