Action Recognition Based on Multi-Level Topological Channel Attention of Human Skeleton

Cited by: 3
|
Authors
Hu, Kai [1 ,2 ]
Shen, Chaowen [1 ]
Wang, Tianyan [1 ]
Shen, Shuai [1 ]
Cai, Chengxue [1 ]
Huang, Huaming [3 ]
Xia, Min [1 ,2 ]
Affiliations
[1] Nanjing Univ Informat Sci & Technol, Sch Automat, Nanjing 210044, Peoples R China
[2] Nanjing Univ Informat Sci & Technol, CICAEET, Nanjing 210044, Peoples R China
[3] Nanjing Univ Informat Sci & Technol, Dept Phys Educ, Nanjing 210044, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
skeleton action recognition; temporal modeling; prior knowledge; ENSEMBLE; NETWORK;
DOI
10.3390/s23249738
Chinese Library Classification
O65 [Analytical Chemistry];
Discipline Codes
070302 ; 081704 ;
Abstract
In action recognition, obtaining skeleton data from human poses is valuable: it helps eliminate the negative effects of environmental noise, such as changes in background and lighting conditions. Although graph convolutional networks (GCNs) can learn distinctive action features, they fail to fully exploit prior knowledge of human body structure and the coordination relations between limbs. To address these issues, this paper proposes a Multi-level Topological Channel Attention Network. First, the Multi-level Topology and Channel Attention Module incorporates prior knowledge of human body structure in a coarse-to-fine manner, effectively extracting action features. Second, the Coordination Module exploits contralateral and ipsilateral coordinated movements from human kinematics. Finally, the Multi-scale Global Spatio-temporal Attention Module captures spatiotemporal features at different granularities, incorporating a causal convolution block and masked temporal attention to prevent the model from exploiting non-causal relationships. The method achieved accuracy rates of 91.9% (Xsub) and 96.3% (Xview) on NTU-RGB+D 60, and 88.5% (Xsub) and 90.3% (Xset) on NTU-RGB+D 120.
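The paper's own module definitions are not given in this record, but the causal constraint described above (masked temporal attention, where frame t may only attend to frames at or before t) can be illustrated with a minimal, hypothetical numpy sketch. The function names and the plain scaled dot-product formulation here are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def causal_attention_mask(T):
    # Lower-triangular boolean mask: position t may attend only to frames <= t.
    return np.tril(np.ones((T, T), dtype=bool))

def masked_temporal_attention(x, mask):
    # x: (T, C) per-frame features. Plain scaled dot-product self-attention
    # with disallowed (future) positions set to -inf before the softmax.
    scores = x @ x.T / np.sqrt(x.shape[1])
    scores = np.where(mask, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ x
```

Because future frames are masked out, perturbing the last frame leaves the attention outputs for all earlier frames unchanged, which is exactly the non-causal leakage the abstract says the module prevents.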
Pages: 26
Related Papers
50 records
  • [1] Multi-level channel attention excitation network for human action recognition in videos
    Wu, Hanbo
    Ma, Xin
    Li, Yibin
    SIGNAL PROCESSING-IMAGE COMMUNICATION, 2023, 114
  • [2] CHAN: Skeleton based action recognition by multi-level feature learning
    Lu, Jian
    Gong, Yinghao
    Zhou, Yanran
    Ma, Chengxian
    Huang, Tingting
    COMPUTER ANIMATION AND VIRTUAL WORLDS, 2023, 34 (06)
  • [3] Human Action Recognition Based On Multi-level Feature Fusion
    Xu, Y. Y.
    Xiao, G. Q.
    Tang, X. Q.
    PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON COMPUTER INFORMATION SYSTEMS AND INDUSTRIAL APPLICATIONS (CISIA 2015), 2015, 18 : 353 - 355
  • [4] Adaptive multi-level graph convolution with contrastive learning for skeleton-based action recognition
    Geng, Pei
    Li, Haowei
    Wang, Fuyun
    Lyu, Lei
    SIGNAL PROCESSING, 2022, 201
  • [5] Multi-level Sparse Coding for Human Action Recognition
    Luo, Huiwu
    Lu, Huanzhang
    2016 8TH INTERNATIONAL CONFERENCE ON INTELLIGENT HUMAN-MACHINE SYSTEMS AND CYBERNETICS (IHMSC), VOL. 1, 2016, : 460 - 463
  • [6] Temporal-Channel Attention and Convolution Fusion for Skeleton-Based Human Action Recognition
    Liang, Chengwu
    Yang, Jie
    Du, Ruolin
    Hu, Wei
    Hou, Ning
    IEEE ACCESS, 2024, 12 : 64937 - 64948
  • [7] Channel attention and multi-scale graph neural networks for skeleton-based action recognition
    Dang, Ronghao
    Liu, Chengju
    Liu, Ming
    Chen, Qijun
    AI COMMUNICATIONS, 2022, 35 (03) : 187 - 205
  • [8] Learning multi-level features for sensor-based human action recognition
    Xu, Yan
    Shen, Zhengyang
    Zhang, Xin
    Gao, Yifan
    Deng, Shujian
    Wang, Yipei
    Fan, Yubo
    Chang, Eric I-Chao
    PERVASIVE AND MOBILE COMPUTING, 2017, 40 : 324 - 338
  • [9] AttnSense: Multi-level Attention Mechanism For Multimodal Human Activity Recognition
    Ma, Haojie
    Li, Wenzhong
    Zhang, Xiao
    Gao, Songcheng
    Lu, Sanglu
    PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 3109 - 3115
  • [10] BODY PART LEVEL ATTENTION MODEL FOR SKELETON-BASED ACTION RECOGNITION
    Zhang, Han
    Song, Yonghong
    Zhang, Yuanlin
    2019 CHINESE AUTOMATION CONGRESS (CAC2019), 2019, : 4297 - 4302