Robust Human Action Recognition Using Global Spatial-Temporal Attention for Human Skeleton Data

Cited: 0
Authors
Han, Yun [1 ,2 ]
Chung, Sheng-Luen [1 ]
Ambikapathi, ArulMurugan [3 ]
Chan, Jui-Shan [1 ]
Lin, Wei-You [1 ]
Su, Shun-Feng [1 ]
Affiliations
[1] Natl Taiwan Univ Sci & Technol, Taipei, Taiwan
[2] Neijiang Normal Univ, Neijiang, Peoples R China
[3] UTECHZONE, Taipei, Taiwan
Source
2018 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN) | 2018
Keywords
Human action recognition; global attention model; accumulative learning curve; action recognition; LSTM; spatial-temporal attention;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Human action recognition from video sequences is one of the most challenging computer vision applications, primarily owing to intrinsic variations in lighting, pose, occlusion, and other factors. The human skeleton joints extracted by the Kinect depth camera have the advantages of a simplified structure and rich content, and are therefore widely used for capturing human actions. However, most current skeleton-based deep learning action recognition methods treat all skeletal joints equally in both the spatial and temporal dimensions. This does not accord with the fact that, for different human actions, the contributions of the skeletal joints can vary significantly both spatially and temporally. Incorporating information about such natural variations will certainly aid in designing a robust human action recognition system. Hence, in this work, we propose a global spatial attention (GSA) model that assigns different weights to different skeletal joints so as to provide precise spatial information for human action recognition. Further, we introduce the notion of an accumulative learning curve (ALC) model that highlights which frames contribute most to the final decision by assigning varying temporal weights to the intermediate accumulated learning results produced by an LSTM over the input frames. The proposed GSA (for spatial information) and ALC (for temporal processing) models are integrated into the LSTM framework to construct a robust action recognition framework that takes human skeletal joints as input and predicts the human action using the enhanced spatial-temporal attention model. Rigorous experiments on the NTU dataset (by far the largest benchmark RGB-D dataset) show that the proposed framework offers the best recognition accuracy with the least algorithmic complexity and training overhead, compared with other state-of-the-art human action recognition models.
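The abstract describes two weighting mechanisms: spatial weights over skeletal joints (GSA) and temporal weights over the per-frame outputs of an LSTM (ALC). A minimal pure-Python sketch of that weighting idea is given below; the function names, data shapes, and use of softmax-normalized scores are illustrative assumptions, not the paper's actual implementation (which learns these scores jointly with the LSTM).

```python
import math

def softmax(xs):
    # Numerically stable softmax: turns raw scores into weights summing to 1.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def apply_spatial_attention(frame, joint_scores):
    # frame: list of per-joint feature vectors for one time step.
    # joint_scores: one (learned) attention score per joint (GSA-style).
    weights = softmax(joint_scores)
    return [[w * v for v in joint] for w, joint in zip(weights, frame)]

def temporal_pool(frame_outputs, frame_scores):
    # frame_outputs: intermediate LSTM outputs, one vector per frame.
    # frame_scores: one (learned) score per frame (ALC-style); the weighted
    # sum emphasizes the frames that contribute most to the final decision.
    weights = softmax(frame_scores)
    pooled = [0.0] * len(frame_outputs[0])
    for w, out in zip(weights, frame_outputs):
        for i, v in enumerate(out):
            pooled[i] += w * v
    return pooled
```

With uniform scores both functions reduce to plain averaging; unequal scores skew the representation toward the more informative joints or frames, which is the core intuition behind the GSA and ALC models.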
Pages: 8