Joint spatial-temporal attention for action recognition

Cited by: 25
Authors
Yu, Tingzhao [1 ,2 ]
Guo, Chaoxu [1 ,2 ]
Wang, Lingfeng [1 ]
Gu, Huxiang [1 ]
Xiang, Shiming [1 ]
Pan, Chunhong [1 ]
Affiliations
[1] Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
[2] Univ Chinese Acad Sci, Sch Comp & Control Engn, Beijing 101408, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Action recognition; Spatial-temporal attention; Two-stage; Representation
DOI
10.1016/j.patrec.2018.07.034
CLC classification number
TP18 [Artificial intelligence theory];
Discipline classification code
081104; 0812; 0835; 1405;
Abstract
In this paper, we propose a novel high-level action representation using a joint spatial-temporal attention model, with application to video-based human action recognition. Specifically, to extract robust motion representations from videos, a new spatial attention module based on 3D convolution is proposed, which attends to the salient regions of each frame. To better handle long-duration videos, a new bidirectional-LSTM-based temporal attention module is introduced, which focuses on the key video cubes rather than the key video frames of a given video. The spatial-temporal attention network can be jointly trained via a two-stage strategy, which enables us to simultaneously explore correlations in both the spatial and temporal domains. Experimental results on benchmark action recognition datasets demonstrate the effectiveness of our network. (c) 2018 Elsevier B.V. All rights reserved.
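The two attention stages summarized in the abstract (spatial attention over per-frame convolutional features, then temporal attention over video cubes) can be sketched roughly as follows. This is a minimal NumPy illustration with randomly initialized projection weights, not the authors' implementation: the function names `spatial_attention` and `temporal_attention`, the feature shapes, and the simple dot-product scoring are all assumptions made for the sketch (the paper uses 3D convolution and a bidirectional LSTM to produce these scores).

```python
import numpy as np

def softmax(x, axis):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_attention(feats, w):
    # feats: (T, H, W, C) convolutional features; w: (C,) scoring weights.
    # Score every spatial location, normalize over H*W, and pool the
    # reweighted features into one vector per frame.
    scores = feats @ w                                    # (T, H, W)
    T, H, W = scores.shape
    attn = softmax(scores.reshape(T, H * W), axis=1).reshape(T, H, W)
    return (feats * attn[..., None]).sum(axis=(1, 2))     # (T, C)

def temporal_attention(cube_feats, v):
    # cube_feats: (N, C), one vector per video cube; v: (C,) scoring weights.
    # Weight the cubes and aggregate into a video-level representation.
    attn = softmax(cube_feats @ v, axis=0)                # (N,)
    return attn @ cube_feats                              # (C,)

rng = np.random.default_rng(0)
feats = rng.normal(size=(16, 7, 7, 64))       # 16 frames of 7x7x64 features
per_frame = spatial_attention(feats, rng.normal(size=64))
cubes = per_frame.reshape(4, 4, 64).mean(axis=1)   # group frames into 4 cubes
video_repr = temporal_attention(cubes, rng.normal(size=64))
print(video_repr.shape)  # (64,)
```

Note the cube-level granularity: temporal attention is applied to short groups of frames rather than individual frames, which is the distinction the abstract draws for long-duration videos.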
Pages: 226-233
Number of pages: 8