Human Activity Recognition Based on Spatial Distribution of Gradients at Sublevels of Average Energy Silhouette Images

Cited by: 38
Authors
Vishwakarma, Dinesh Kumar [1 ]
Singh, Kuldeep [2 ]
Affiliations
[1] Delhi Technol Univ, Dept Elect & Commun Engn, Delhi 110042, India
[2] Govt India, Bharat Elect Ltd, Cent Res Lab, Minist Def, Ghaziabad 201010, India
Keywords
Computation of spatial distributions; human action analysis; human action recognition; hybrid classifier; texture segmentation; sum of directional pixels (SDPs); FEATURES; APPEARANCE; SHAPE; TRANSFORM; PATTERN; VISION; SYSTEM; MOTION
DOI
10.1109/TCDS.2016.2577044
CLC classification
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
The aim of this paper is to present a unified framework for human action and activity recognition by analysing the effect of computing the spatial distribution of gradients (SDGs) on average energy silhouette images (AESIs). Based on an analysis of SDG computation at various decomposition levels, an effective approach to computing the SDGs is developed. The AESI is constructed to represent the shape of an action or activity; it is a reflection of the 3-D pose into a 2-D pose. To describe the AESIs, the SDGs at various sublevels and the variations of the sum of directional pixels (SDPs) are computed. The temporal content of the activity is captured through the R-transform (RT). Finally, the shape evidence computed through the SDGs and SDPs and the temporal evidence obtained through the RT of the human body are fused at the recognition stage, resulting in a new, powerful unified feature-map model. The performance of the proposed framework is evaluated on three publicly available datasets, i.e., Weizmann, KTH, and Ballet, and the recognition accuracy is computed using a hybrid classifier. The highest recognition accuracy achieved on these datasets is compared with that of similar state-of-the-art techniques and demonstrates superior performance.
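Two of the ingredients named in the abstract, the average energy silhouette image (AESI) and the sum of directional pixels (SDPs), can be illustrated with a minimal NumPy sketch. This is a hedged illustration under simple assumptions, not the authors' implementation: the AESI is assumed to be the per-pixel average of a stack of binary silhouette frames, and the SDPs are assumed to be the row-wise and column-wise projections of that energy image. The function names and the toy sequence are invented for the example.

```python
import numpy as np

def average_energy_silhouette_image(silhouettes):
    """Average a stack of binary silhouette frames (T, H, W) into one
    energy image; each pixel value reflects how often that location is
    covered by the body over the action sequence."""
    frames = np.asarray(silhouettes, dtype=float)
    return frames.mean(axis=0)

def sum_of_directional_pixels(aesi):
    """Project the AESI along two directions: row sums (horizontal
    projection) and column sums (vertical projection)."""
    return aesi.sum(axis=1), aesi.sum(axis=0)

# Toy sequence: a 3-pixel vertical bar that shifts one pixel to the
# right in each of three frames (a crude stand-in for motion).
seq = np.zeros((3, 5, 5))
for t in range(3):
    seq[t, 1:4, t + 1] = 1.0

aesi = average_energy_silhouette_image(seq)   # values in {0, 1/3}
rows, cols = sum_of_directional_pixels(aesi)  # 1-D shape profiles
```

In this toy case every occupied pixel appears in exactly one frame, so the AESI holds 1/3 at each visited location, and both projections come out as the profile [0, 1, 1, 1, 0]; real descriptors would then be computed on top of such projections.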
Pages: 316 - 327 (12 pages)