Action Recognition Based on Multi-Level Topological Channel Attention of Human Skeleton

Times Cited: 3
Authors
Hu, Kai [1 ,2 ]
Shen, Chaowen [1 ]
Wang, Tianyan [1 ]
Shen, Shuai [1 ]
Cai, Chengxue [1 ]
Huang, Huaming [3 ]
Xia, Min [1 ,2 ]
Affiliations
[1] Nanjing Univ Informat Sci & Technol, Sch Automat, Nanjing 210044, Peoples R China
[2] Nanjing Univ Informat Sci & Technol, CICAEET, Nanjing 210044, Peoples R China
[3] Nanjing Univ Informat Sci & Technol, Dept Phys Educ, Nanjing 210044, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
skeleton action recognition; temporal modeling; prior knowledge; ENSEMBLE; NETWORK;
DOI
10.3390/s23249738
CLC Number
O65 [Analytical Chemistry];
Discipline Classification Number
070302; 081704;
Abstract
In action recognition, obtaining skeleton data from human poses is valuable because it suppresses the negative effects of environmental noise, including changes in background and lighting conditions. Although graph convolutional networks (GCNs) can learn distinctive action features, they do not fully exploit prior knowledge of human body structure or the coordination relations between limbs. To address these issues, this paper proposes a Multi-level Topological Channel Attention Network. Firstly, the Multi-level Topology and Channel Attention Module incorporates prior knowledge of human body structure in a coarse-to-fine manner to extract action features effectively. Secondly, the Coordination Module exploits the contralateral and ipsilateral coordinated movements found in human kinematics. Lastly, the Multi-scale Global Spatio-temporal Attention Module captures spatiotemporal features at different granularities and incorporates a causal convolution block and masked temporal attention to prevent the model from learning non-causal relationships. The method achieves accuracy rates of 91.9% (Xsub) and 96.3% (Xview) on NTU-RGB+D 60, and 88.5% (Xsub) and 90.3% (Xset) on NTU-RGB+D 120.
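The causal constraint mentioned in the abstract can be illustrated with a short sketch. The following is a minimal PyTorch example and not the authors' implementation: it assumes an (N, C, T, V) skeleton feature layout (batch, channels, frames, joints) and applies a boolean upper-triangular mask so that each frame attends only to itself and earlier frames, which is one way to realize masked temporal attention. The class name MaskedTemporalAttention and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): masked temporal self-attention over
# per-joint skeleton features with a causal mask, so frame t never attends
# to frames after t. Assumed tensor layout: (N, C, T, V).
import torch
import torch.nn as nn


class MaskedTemporalAttention(nn.Module):
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, T, V) -> treat each joint's sequence independently: (N*V, T, C)
        n, c, t, v = x.shape
        seq = x.permute(0, 3, 2, 1).reshape(n * v, t, c)
        # Causal mask: True marks positions that may NOT be attended to (future frames).
        causal_mask = torch.triu(
            torch.ones(t, t, dtype=torch.bool, device=x.device), diagonal=1
        )
        out, _ = self.attn(seq, seq, seq, attn_mask=causal_mask)
        # Residual connection, then restore the (N, C, T, V) layout.
        return (seq + out).reshape(n, v, t, c).permute(0, 3, 2, 1)


if __name__ == "__main__":
    # Example: 2 clips, 64 channels, 16 frames, 25 joints (NTU skeleton).
    x = torch.randn(2, 64, 16, 25)
    print(MaskedTemporalAttention(64)(x).shape)  # torch.Size([2, 64, 16, 25])
```

Because the mask is strictly upper triangular, the output at frame t depends only on frames up to t, mirroring the non-causal-relationship constraint described in the abstract.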
Pages: 26