Deep Learning Based Human Activity Recognition Using Spatio-Temporal Image Formation of Skeleton Joints

Cited: 34
Authors
Tasnim, Nusrat [1 ]
Islam, Mohammad Khairul [2 ]
Baek, Joong-Hwan [1 ]
Affiliations
[1] Korea Aerosp Univ, Sch Elect & Informat Engn, Goyang 10540, South Korea
[2] Univ Chittagong, Dept Comp Sci & Engn, Chittagong 4331, Bangladesh
Source
APPLIED SCIENCES-BASEL | 2021, Vol. 11, Iss. 6
Keywords
spatio-temporal image formation; human activity recognition; deep learning; fusion strategies; transfer learning; SYSTEM;
DOI
10.3390/app11062675
Chinese Library Classification (CLC)
O6 [Chemistry]
Subject Classification Code
0703
Abstract
Human activity recognition has become a significant research trend in computer vision, image processing, and human-machine or human-object interaction, driven by applications in cost-effective monitoring, time management, rehabilitation, and pandemic response. Over the past years, several methods have been published for human action recognition using RGB (red, green, and blue), depth, and skeleton datasets. Most of the methods introduced for action classification on skeleton datasets are constrained in several respects, including feature representation, complexity, and performance. Providing an effective and efficient method for human action discrimination from 3D skeleton data therefore remains a challenging problem. There is considerable room to map 3D skeleton joint coordinates into spatio-temporal formats that reduce system complexity, recognize human behaviors more accurately, and improve overall performance. In this paper, we propose a spatio-temporal image formation (STIF) technique for 3D skeleton joints that captures spatial information and temporal changes for action discrimination. We apply transfer learning (MobileNetV2, DenseNet121, and ResNet18 pretrained on the ImageNet dataset) to extract discriminative features and evaluate the proposed method with several fusion techniques. We mainly investigate the effect of three fusion methods, namely element-wise average, multiplication, and maximization, on recognition performance. With the STIF representation, our deep learning-based method outperforms prior works on UTD-MHAD (University of Texas at Dallas multi-modal human action dataset) and MSR-Action3D (Microsoft action 3D), two publicly available benchmark 3D skeleton datasets. We attain accuracies of approximately 98.93%, 99.65%, and 98.80% on UTD-MHAD and 96.00%, 98.75%, and 97.08% on MSR-Action3D using MobileNetV2, DenseNet121, and ResNet18, respectively.
Pages: 24
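
The abstract describes score-level fusion of ImageNet-pretrained backbones applied to STIF images. Below is a minimal illustrative sketch, not the authors' released code, assuming a PyTorch/torchvision setup; the class count, the placeholder input tensors, and the helper names (build_backbone, fuse_scores) are hypothetical and stand in for whatever the paper's pipeline actually uses.

```python
# Hedged sketch: ImageNet-pretrained backbones with a replaced classifier head,
# plus the three element-wise fusion strategies named in the abstract
# (average, multiplication, maximization) applied to softmax scores.
import torch
import torch.nn.functional as F
from torchvision import models

NUM_CLASSES = 27  # e.g., UTD-MHAD defines 27 action classes (placeholder choice)

def build_backbone(name: str) -> torch.nn.Module:
    """Load an ImageNet-pretrained CNN and swap its classifier for NUM_CLASSES outputs."""
    if name == "mobilenet_v2":
        net = models.mobilenet_v2(weights="IMAGENET1K_V1")
        net.classifier[1] = torch.nn.Linear(net.last_channel, NUM_CLASSES)
    elif name == "densenet121":
        net = models.densenet121(weights="IMAGENET1K_V1")
        net.classifier = torch.nn.Linear(net.classifier.in_features, NUM_CLASSES)
    else:  # "resnet18"
        net = models.resnet18(weights="IMAGENET1K_V1")
        net.fc = torch.nn.Linear(net.fc.in_features, NUM_CLASSES)
    return net

def fuse_scores(score_a: torch.Tensor, score_b: torch.Tensor, mode: str) -> torch.Tensor:
    """Element-wise fusion of two (batch, classes) softmax score tensors."""
    if mode == "average":
        return (score_a + score_b) / 2.0
    if mode == "multiplication":
        return score_a * score_b
    if mode == "maximization":
        return torch.maximum(score_a, score_b)
    raise ValueError(f"unknown fusion mode: {mode}")

# Example usage: two STIF inputs (random placeholders here) scored by two backbones,
# then fused at the score level before taking the predicted action class.
model_a = build_backbone("mobilenet_v2").eval()
model_b = build_backbone("densenet121").eval()
stif_view_a = torch.randn(1, 3, 224, 224)  # placeholder for an STIF image tensor
stif_view_b = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    probs_a = F.softmax(model_a(stif_view_a), dim=1)
    probs_b = F.softmax(model_b(stif_view_b), dim=1)
fused = fuse_scores(probs_a, probs_b, mode="average")
predicted_action = fused.argmax(dim=1)
```

Score-level (late) fusion keeps each backbone independent, so swapping the fusion rule between average, multiplication, and maximization only changes the final combination step, which matches the comparison the abstract describes.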