Deep Learning Based Human Activity Recognition Using Spatio-Temporal Image Formation of Skeleton Joints

Cited by: 34
Authors
Tasnim, Nusrat [1 ]
Islam, Mohammad Khairul [2 ]
Baek, Joong-Hwan [1 ]
Affiliations
[1] Korea Aerosp Univ, Sch Elect & Informat Engn, Goyang 10540, South Korea
[2] Univ Chittagong, Dept Comp Sci & Engn, Chittagong 4331, Bangladesh
Source
APPLIED SCIENCES-BASEL | 2021, Vol. 11, Issue 6
Keywords
spatio-temporal image formation; human activity recognition; deep learning; fusion strategies; transfer learning; SYSTEM;
DOI
10.3390/app11062675
Chinese Library Classification
O6 [Chemistry];
Discipline code
0703;
Abstract
Human activity recognition has become a significant research trend in computer vision, image processing, and human-machine or human-object interaction, driven by applications in cost-effective monitoring, time management, rehabilitation, and disease-pandemic response. Over the past years, several methods have been published for human action recognition using RGB (red, green, and blue), depth, and skeleton datasets. Most methods introduced for action classification on skeleton datasets are limited in several respects, including feature representation, complexity, and performance, so providing an effective and efficient method for human action discrimination from 3D skeleton data remains a challenging problem. There is considerable room to map 3D skeleton joint coordinates into spatio-temporal formats that reduce system complexity, recognize human behaviors more accurately, and improve overall performance. In this paper, we propose a spatio-temporal image formation (STIF) technique for 3D skeleton joints that captures spatial information and temporal changes for action discrimination. We apply transfer learning (the pretrained models MobileNetV2, DenseNet121, and ResNet18, trained on the ImageNet dataset) to extract discriminative features and evaluate the proposed method with several fusion techniques. We mainly investigate the effect of three fusion methods, namely element-wise average, multiplication, and maximization, on human action recognition performance. Our deep learning-based method with the STIF representation outperforms prior works on UTD-MHAD (University of Texas at Dallas multi-modal human action dataset) and MSR-Action3D (Microsoft action 3D), two publicly available benchmark 3D skeleton datasets. We attain accuracies of approximately 98.93%, 99.65%, and 98.80% on UTD-MHAD and 96.00%, 98.75%, and 97.08% on MSR-Action3D using MobileNetV2, DenseNet121, and ResNet18, respectively.
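As a rough illustration of the two ideas named in the abstract, the sketch below shows (a) a minimal spatio-temporal image encoding of a 3D skeleton sequence and (b) element-wise average/multiplication/maximization fusion of per-model class scores. This is an assumed, simplified stand-in, not the authors' actual STIF pipeline: the function names, the joints-by-time pixel layout, and the min-max normalization are illustrative choices made here, and the paper's encoding additionally involves rendering joint trajectories before feeding the images to the pretrained networks.

```python
import numpy as np

def skeleton_to_stif(joints):
    """Map a 3D skeleton sequence to a spatio-temporal image (simplified sketch).

    joints : (T, J, 3) array of x/y/z joint coordinates over T frames.
    Returns a (J, T, 3) uint8 image: rows index joints, columns index time,
    and the three channels carry min-max normalized x, y, z coordinates.
    """
    lo = joints.min(axis=(0, 1), keepdims=True)      # per-channel minimum
    hi = joints.max(axis=(0, 1), keepdims=True)      # per-channel maximum
    norm = (joints - lo) / (hi - lo + 1e-8)          # scale into [0, 1)
    img = (norm * 255).astype(np.uint8)              # (T, J, 3) pixel values
    return img.transpose(1, 0, 2)                    # (J, T, 3): joints x time

def fuse_scores(scores, mode="average"):
    """Element-wise fusion of per-stream class-score vectors."""
    s = np.stack(scores)                             # (streams, classes)
    if mode == "average":
        return s.mean(axis=0)
    if mode == "multiplication":
        return s.prod(axis=0)
    if mode == "maximization":
        return s.max(axis=0)
    raise ValueError(f"unknown fusion mode: {mode}")

# Toy usage: a 40-frame, 20-joint sequence and two 2-class score vectors.
seq = np.random.rand(40, 20, 3)
stif = skeleton_to_stif(seq)                         # image fed to a CNN
fused = fuse_scores([np.array([0.2, 0.8]),
                     np.array([0.6, 0.4])], "average")
```

In the paper's setting, images like `stif` would be passed through the ImageNet-pretrained backbones (MobileNetV2, DenseNet121, ResNet18), and `fuse_scores` corresponds to combining the resulting per-class outputs before the final decision.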
Pages: 24