Human action recognition based on spatial-temporal relational model and LSTM-CNN framework

Cited by: 7
Authors
Senthilkumar, N. [1 ]
Manimegalai, M. [2 ]
Karpakam, S. [3 ]
Ashokkumar, S. R. [3 ]
Premkumar, M. [4 ]
Affiliations
[1] Dr NGP Inst Technol, Dept ECE, Coimbatore, Tamil Nadu, India
[2] Mahendra Engn Coll Women, Dept ECE, Namakkal, India
[3] Sri Eshwar Coll Engn, Dept ECE, Coimbatore, Tamil Nadu, India
[4] SSM Inst Engn & Technol, Dept ECE, Dindigul, India
Keywords
Action recognition; Dilated bi-directional LSTM; CNN
DOI
10.1016/j.matpr.2021.12.004
Chinese Library Classification
T [Industrial Technology]
Subject Classification Code
08
Abstract
Due to the increasing popularity of human skeleton capture systems, many new methods for skeleton-based action recognition have been proposed, including Long Short-Term Memory (LSTM) networks and Convolutional Neural Networks (CNNs). These methods can capture significant spatial-temporal information, but their capacity to do so in real-world scenarios is limited. In this paper, a new spatial-temporal model with a bi-temporal end-to-end framework is proposed, together with a novel structure that combines LSTM and CNN components. The structure uses a dependency model to build the skeleton data for the proposed network. Copyright (C) 2022 Elsevier Ltd. All rights reserved. Selection and peer-review under responsibility of the scientific committee of the International Conference on Innovation and Application in Science and Technology.
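The abstract describes, at a high level, a framework that combines a CNN with a (dilated) bi-directional LSTM for skeleton-based action recognition. As a rough illustration only, the sketch below shows one common way such a combination is wired up; it is not the authors' model. PyTorch, the layer sizes, the 25-joint/3-coordinate input layout, the average-pooling over joints, and the use of a plain bidirectional nn.LSTM in place of the paper's dilated bi-directional LSTM are all assumptions made for this example.

import torch
import torch.nn as nn


class CNNBiLSTM(nn.Module):
    """Illustrative CNN + bi-directional LSTM for skeleton sequences (assumed architecture)."""

    def __init__(self, coord_dim=3, num_classes=60, cnn_channels=64, lstm_hidden=128):
        super().__init__()
        # Spatial stage: 1D convolutions over the joints of each frame.
        self.cnn = nn.Sequential(
            nn.Conv1d(coord_dim, cnn_channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(cnn_channels, cnn_channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over joints -> one feature vector per frame
        )
        # Temporal stage: plain bidirectional LSTM (stands in for the dilated
        # bi-directional LSTM named in the keywords; nn.LSTM has no dilation option).
        self.bilstm = nn.LSTM(cnn_channels, lstm_hidden,
                              batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * lstm_hidden, num_classes)

    def forward(self, x):
        # x: (batch, frames, joints, coords)
        b, t, j, c = x.shape
        x = x.reshape(b * t, j, c).transpose(1, 2)  # (b*t, coords, joints)
        feat = self.cnn(x).squeeze(-1)              # (b*t, cnn_channels)
        feat = feat.reshape(b, t, -1)               # (b, frames, cnn_channels)
        out, _ = self.bilstm(feat)                  # (b, frames, 2*lstm_hidden)
        return self.classifier(out[:, -1])          # class logits from the last time step


if __name__ == "__main__":
    model = CNNBiLSTM()
    clip = torch.randn(2, 30, 25, 3)  # 2 clips, 30 frames, 25 joints, (x, y, z)
    print(model(clip).shape)          # torch.Size([2, 60])

In this sketch the per-frame CNN extracts spatial features over the joints, and the bi-directional LSTM models temporal dependencies across frames before classification; the fusion and classification details of the published model may differ.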
Pages: 2087-2091
Number of pages: 5
Related Papers
50 records in total
  • [21] Spatial-temporal saliency action mask attention network for action recognition. Jiang, Min; Pan, Na; Kong, Jun. JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2020, 71.
  • [22] Spatial-temporal graph attention networks for skeleton-based action recognition. Huang, Qingqing; Zhou, Fengyu; He, Jiakai; Zhao, Yang; Qin, Runze. JOURNAL OF ELECTRONIC IMAGING, 2020, 29 (05).
  • [23] Focal and Global Spatial-Temporal Transformer for Skeleton-Based Action Recognition. Gao, Zhimin; Wang, Peitao; Lv, Pei; Jiang, Xiaoheng; Liu, Qidong; Wang, Pichao; Xu, Mingliang; Li, Wanqing. COMPUTER VISION - ACCV 2022, PT IV, 2023, 13844: 155-171.
  • [24] Deep Spatial-Temporal Model Based Cross-Scene Action Recognition Using Commodity WiFi. Sheng, Biyun; Xiao, Fu; Sha, Letian; Sun, Lijuan. IEEE INTERNET OF THINGS JOURNAL, 2020, 7 (04): 3592-3601.
  • [25] STST: Spatial-Temporal Specialized Transformer for Skeleton-based Action Recognition. Zhang, Yuhan; Wu, Bo; Li, Wen; Duan, Lixin; Gan, Chuang. PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021: 3229-3237.
  • [26] Robust Human Action Recognition Using Global Spatial-Temporal Attention for Human Skeleton Data. Han, Yun; Chung, Sheng-Luen; Ambikapathi, ArulMurugan; Chan, Jui-Shan; Lin, Wei-You; Su, Shun-Feng. 2018 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2018.
  • [27] A Spatial-Temporal Feature Fusion Strategy for Skeleton-Based Action Recognition. Chen, Yitian; Xu, Yuchen; Xie, Qianglai; Xiong, Lei; Yao, Leiyue. 2023 INTERNATIONAL CONFERENCE ON DATA SECURITY AND PRIVACY PROTECTION, DSPP, 2023: 207-215.
  • [28] Human action recognition using attention based LSTM network with dilated CNN features. Muhammad, Khan; Mustaqeem; Ullah, Amin; Imran, Ali Shariq; Sajjad, Muhammad; Kiran, Mustafa Servet; Sannino, Giovanna; de Albuquerque, Victor Hugo C. FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2021, 125: 820-830.
  • [29] Research on Urban Road Mean Speed Prediction Method Based on LSTM-CNN Model. Zhao, Ke; Yuan, Shaoxin; Wang, Zhuanzhuan; Wang, Jiaxuan. 2022 IEEE 7TH INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION ENGINEERING, ICITE, 2022: 365-371.
  • [30] An End-to-End Spatial-Temporal Transformer Model for Surgical Action Triplet Recognition. Zou, Xiaoyang; Yu, Derong; Tao, Rong; Zheng, Guoyan. 12TH ASIAN-PACIFIC CONFERENCE ON MEDICAL AND BIOLOGICAL ENGINEERING, VOL 2, APCMBE 2023, 2024, 104: 114-120.