Skeleton-based human activity recognition using ConvLSTM and guided feature learning

Cited by: 46
Authors
Yadav, Santosh Kumar [1 ,2 ,3 ]
Tiwari, Kamlesh [4 ]
Pandey, Hari Mohan [5 ]
Akbar, Shaik Ali [1 ,2 ]
Affiliations
[1] Acad Sci & Innovat Res AcSIR, Ghaziabad 201002, Uttar Pradesh, India
[2] Cent Elect Engn Res Inst CEERI, CSIR, Pilani 333031, Rajasthan, India
[3] DeepBlink LLC, 30 N Gould St Ste R, Sheridan, WY 82801 USA
[4] Birla Inst Technol & Sci Pilani, Dept CSIS, Pilani Campus, Pilani 333031, Rajasthan, India
[5] Edge Hill Univ, Dept Comp Sci, Ormskirk, Lancs, England
Keywords
Activity recognition; CNNs; LSTMs; ConvLSTM; Skeleton tracking; Fall detection
DOI
10.1007/s00500-021-06238-7
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification
081104; 0812; 0835; 1405
Abstract
Human activity recognition aims to determine the actions performed by a human in an image or video. Examples of human activities include standing, running, sitting, and sleeping. These activities may involve intricate motion patterns as well as undesired events such as falling. This paper proposes a novel deep convolutional long short-term memory (ConvLSTM) network for skeleton-based activity recognition and fall detection. The proposed ConvLSTM network is a sequential fusion of convolutional neural networks (CNNs), long short-term memory (LSTM) networks, and fully connected layers. The acquisition system applies human detection and pose estimation to pre-calculate skeleton coordinates from the image/video sequence. The ConvLSTM model uses the raw skeleton coordinates, along with their characteristic geometrical and kinematic features, to construct novel guided features. The geometrical and kinematic features are built upon the raw skeleton coordinates using relative joint positions, differences between joints, spherical joint angles between selected joints, and their angular velocities. The novel spatiotemporal guided features are obtained using a trained multi-layer CNN-LSTM combination, and a classification head consisting of fully connected layers is subsequently applied. The proposed model has been evaluated on the KinectHAR dataset, which consists of 130,000 samples with 81 attribute values each, collected with the help of a Kinect (v2) sensor. Experimental results are compared against the performance of isolated CNNs and LSTM networks. The proposed ConvLSTM achieved an accuracy of 98.89%, outperforming the CNN and LSTM baselines, which achieved 93.89% and 92.75%, respectively. The system has been tested in real time and found to be independent of pose, orientation relative to the camera, individual, clothing, etc. The code and dataset will be made publicly available.
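The abstract names the geometrical and kinematic features (relative joint positions, differences between joints, spherical joint angles, and angular velocities) but not their exact formulas. The following is a minimal sketch of how such features could be derived from raw skeleton coordinates; it is not the authors' released code, and the reference joint, the consecutive-joint pairing scheme, and the frame rate are illustrative assumptions.

```python
# Sketch of geometrical/kinematic skeleton features (illustrative, not the paper's code).
import numpy as np

def spherical_angles(vec):
    """Spherical angles (theta, phi) of a 3-D joint-to-joint vector."""
    x, y, z = vec
    r = np.linalg.norm(vec) + 1e-8                 # radial distance, guarded against /0
    theta = np.arccos(np.clip(z / r, -1.0, 1.0))   # polar angle
    phi = np.arctan2(y, x)                          # azimuthal angle
    return theta, phi

def frame_features(joints, ref_idx=0):
    """Geometrical features for one frame of shape (num_joints, 3).

    ref_idx is an assumed reference joint (e.g., the spine base).
    """
    rel = joints - joints[ref_idx]          # relative joint positions
    diffs = joints[1:] - joints[:-1]        # differences between (here: consecutive) joints
    angles = np.array([spherical_angles(d) for d in diffs])
    return np.concatenate([rel.ravel(), diffs.ravel(), angles.ravel()])

def sequence_features(seq, fps=30.0):
    """Per-frame features plus angular velocities for seq of shape (frames, joints, 3)."""
    feats = np.stack([frame_features(f) for f in seq])
    # Kinematic part: finite-difference angular velocity of the spherical angles.
    n_angles = 2 * (seq.shape[1] - 1)
    angles = feats[:, -n_angles:]
    ang_vel = np.vstack([np.zeros((1, n_angles)), np.diff(angles, axis=0) * fps])
    return np.concatenate([feats, ang_vel], axis=1)
```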
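Similarly, the described sequential fusion of CNNs, an LSTM, and fully connected layers can be sketched as below. Only the CNN -> LSTM -> fully connected ordering and the 81-attribute per-frame input follow from the abstract; all layer widths, kernel sizes, the pooling size, and the number of activity classes are illustrative guesses.

```python
# Sketch of the sequential CNN -> LSTM -> FC fusion (hyperparameters are assumptions).
import torch
import torch.nn as nn

class ConvLSTMClassifier(nn.Module):
    def __init__(self, feat_dim=81, conv_channels=64, lstm_hidden=128, num_classes=10):
        super().__init__()
        # CNN stage: 1-D convolutions over each frame's feature vector.
        self.cnn = nn.Sequential(
            nn.Conv1d(1, conv_channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(conv_channels, conv_channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(16),   # fixed-size summary of each frame
        )
        # LSTM stage: models temporal dependencies across frames.
        self.lstm = nn.LSTM(conv_channels * 16, lstm_hidden, batch_first=True)
        # Classification head: fully connected layers on the last hidden state.
        self.head = nn.Sequential(
            nn.Linear(lstm_hidden, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        # x: (batch, frames, feat_dim)
        b, t, f = x.shape
        z = self.cnn(x.reshape(b * t, 1, f))   # per-frame spatial features
        z = z.reshape(b, t, -1)
        out, _ = self.lstm(z)
        return self.head(out[:, -1])           # class logits

# Example: a batch of 8 clips, 30 frames each, 81 skeleton attributes per frame.
model = ConvLSTMClassifier()
logits = model(torch.randn(8, 30, 81))
print(logits.shape)  # torch.Size([8, 10])
```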
Pages: 877-890
Page count: 14