Human Action Recognition using Skeleton features

Cited by: 1
Authors
Patil, Akash Anil [1 ]
Swaminathan, A. [1 ]
Rajan, Ashoka R. [1 ]
Narayanan, Neela, V [1 ]
Gayathri, R. [1 ]
Affiliation
[1] Vellore Inst Technol, Vellore, Tamil Nadu, India
Source
2022 IEEE INTERNATIONAL SYMPOSIUM ON MIXED AND AUGMENTED REALITY ADJUNCT (ISMAR-ADJUNCT 2022) | 2022
Keywords
Action Recognition; Human Action Recognition; Skeleton-based Action recognition; Skeleton features; Abnormal detection; Human Joint Positions; Convolutional Neural Network; Two-stream architecture; POSE;
DOI
10.1109/ISMAR-Adjunct57072.2022.00066
CLC number
TP3 [Computing technology, computer technology];
Discipline code
0812;
Abstract
Many works in the field of action recognition are based on the pose estimated in each frame, but most of them demand a large amount of computation to store and process the collection of video frames, which makes action recognition computationally expensive. Our main objective was to find suitable features and classify human behavior based on the recognized actions. To avoid storing and processing raw RGB input frames, we propose a computationally inexpensive approach: we convert the videos into skeleton-based spatial-temporal graphs, on which a Spatial-Temporal Graph Convolutional Network (ST-GCN) can detect human actions more accurately than direct RGB input. To derive robust features from these graphs, long-range and multi-scale context aggregation and spatial-temporal dependency modeling are critical aspects of a powerful feature extractor. We first used TensorFlow pose estimation with the CMU and MobileNet pretrained models from OpenPose, trained on the COCO dataset, to extract key feature points. We then opted for the NTU RGB+D 60 3D skeleton dataset from the Rapid-Rich Object Search (ROSE) Lab and the MS-G3D model, which has stronger generalization capability, and measured performance by the model's accuracy on test data. Moreover, our models could classify a person's behavioral patterns as normal or abnormal based on their actions.
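The skeleton-graph pipeline the abstract describes starts from a fixed joint graph. A minimal sketch of that first step, assuming the COCO 18-keypoint layout produced by OpenPose-style estimators and the symmetrically normalized adjacency commonly used by ST-GCN-style models (the edge list and function name here are illustrative, not the paper's code):

```python
import numpy as np

# COCO 18-keypoint skeleton edges (OpenPose-style indexing:
# 0=nose, 1=neck, 2-4=right arm, 5-7=left arm, 8-10=right leg,
# 11-13=left leg, 14-17=eyes and ears). Illustrative edge list.
EDGES = [
    (0, 1), (1, 2), (2, 3), (3, 4), (1, 5), (5, 6), (6, 7),
    (1, 8), (8, 9), (9, 10), (1, 11), (11, 12), (12, 13),
    (0, 14), (14, 16), (0, 15), (15, 17),
]

def normalized_adjacency(num_joints=18, edges=EDGES):
    """Build the skeleton graph as D^{-1/2} (A + I) D^{-1/2}:
    symmetric adjacency with self-loops, degree-normalized, the
    form graph convolutions over joints typically operate on."""
    a = np.eye(num_joints)          # self-loops (A + I)
    for i, j in edges:
        a[i, j] = a[j, i] = 1.0     # undirected bone edges
    d = np.power(a.sum(axis=1), -0.5)
    return a * d[:, None] * d[None, :]

A_hat = normalized_adjacency()
```

A sequence of per-frame keypoints stacked along time then gives the spatial-temporal graph that ST-GCN or MS-G3D layers convolve over.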
Pages: 289-296
Number of pages: 8