Deep Learning Based Human Activity Recognition Using Spatio-Temporal Image Formation of Skeleton Joints

Cited by: 34
Authors
Tasnim, Nusrat [1 ]
Islam, Mohammad Khairul [2 ]
Baek, Joong-Hwan [1 ]
Affiliations
[1] Korea Aerosp Univ, Sch Elect & Informat Engn, Goyang 10540, South Korea
[2] Univ Chittagong, Dept Comp Sci & Engn, Chittagong 4331, Bangladesh
Source
APPLIED SCIENCES-BASEL | 2021, Vol. 11, Issue 06
Keywords
spatio-temporal image formation; human activity recognition; deep learning; fusion strategies; transfer learning; SYSTEM;
DOI
10.3390/app11062675
Chinese Library Classification
O6 [Chemistry];
Subject Classification Code
0703 ;
Abstract
Human activity recognition has become a significant research trend in computer vision, image processing, and human-machine or human-object interaction, driven by cost-effectiveness, time management, rehabilitation, and disease pandemics. Over the past years, several methods have been published for human action recognition using RGB (red, green, and blue), depth, and skeleton datasets. Most methods introduced for action classification using skeleton datasets are constrained in some respects, including feature representation, complexity, and performance. Providing an effective and efficient method for human action discrimination using a 3D skeleton dataset therefore remains a challenging problem. There is considerable room to map the 3D skeleton joint coordinates into spatio-temporal formats to reduce system complexity, to recognize human behaviors more accurately, and to improve overall performance. In this paper, we suggest a spatio-temporal image formation (STIF) technique for 3D skeleton joints that captures spatial information and temporal changes for action discrimination. We apply transfer learning (the pretrained models MobileNetV2, DenseNet121, and ResNet18, trained on the ImageNet dataset) to extract discriminative features and evaluate the proposed method with several fusion techniques. We mainly investigate the effect of three fusion methods, namely element-wise average, multiplication, and maximization, on recognition performance. Our deep learning-based method with STIF representation outperforms prior works on UTD-MHAD (University of Texas at Dallas multi-modal human action dataset) and MSR-Action3D (Microsoft action 3D), two publicly available benchmark 3D skeleton datasets. We attain accuracies of approximately 98.93%, 99.65%, and 98.80% on UTD-MHAD and 96.00%, 98.75%, and 97.08% on MSR-Action3D using MobileNetV2, DenseNet121, and ResNet18, respectively.
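The three fusion strategies named in the abstract can be sketched as element-wise operations on feature vectors. The vectors below are hypothetical placeholders for illustration only, not features produced by the paper's actual pretrained networks or STIF images:

```python
import numpy as np

# Hypothetical feature vectors from two network streams
# (e.g., features of two STIF representations); values are illustrative.
f1 = np.array([0.2, 0.8, 0.5, 0.1])
f2 = np.array([0.6, 0.4, 0.5, 0.9])

fused_avg = (f1 + f2) / 2.0     # element-wise average
fused_mul = f1 * f2             # element-wise multiplication
fused_max = np.maximum(f1, f2)  # element-wise maximization
```

Each operation yields a fused vector of the same dimensionality as its inputs, which can then be passed to a classifier; the paper compares how the choice among these three affects recognition accuracy.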
Pages: 24
Related papers (67 in total)
  • [41] MobileNetV2: Inverted Residuals and Linear Bottlenecks
    Sandler, Mark
    Howard, Andrew
    Zhu, Menglong
    Zhmoginov, Andrey
    Chen, Liang-Chieh
    [J]. 2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 4510 - 4520
  • [43] Skeleton-based action recognition with hierarchical spatial reasoning and temporal stack learning network
    Si, Chenyang
    Jing, Ya
    Wang, Wei
    Wang, Liang
    Tan, Tieniu
    [J]. PATTERN RECOGNITION, 2020, 107
  • [44] Simonyan K, 2014, ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS, V27
  • [45] Song SJ, 2017, AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, P4263
  • [46] Deep Learning-Based Action Recognition Using 3D Skeleton Joints Information
    Tasnim, Nusrat
    Islam, Md. Mahbubul
    Baek, Joong-Hwan
    [J]. INVENTIONS, 2020, 5 (03) : 1 - 15
  • [47] Convolutional Neural Network-Based Action Recognition on Depth Maps
    Trelinski, Jacek
    Kwolek, Bogdan
    [J]. COMPUTER VISION AND GRAPHICS ( ICCVG 2018), 2018, 11114 : 209 - 221
  • [48] Human Action Recognition by Representing 3D Skeletons as Points in a Lie Group
    Vemulapalli, Raviteja
    Arrate, Felipe
    Chellappa, Rama
    [J]. 2014 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2014, : 588 - 595
  • [49] Modeling Temporal Dynamics and Spatial Configurations of Actions Using Two-Stream Recurrent Neural Networks
    Wang, Hongsong
    Wang, Liang
    [J]. 30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, : 3633 - 3642
  • [50] Action Recognition Based on Joint Trajectory Maps Using Convolutional Neural Networks
    Wang, Pichao
    Li, Zhaoyang
    Hou, Yonghong
    Li, Wanqing
    [J]. MM'16: PROCEEDINGS OF THE 2016 ACM MULTIMEDIA CONFERENCE, 2016, : 97 - 106