Attentional weighting strategy-based dynamic GCN for skeleton-based action recognition

Cited by: 0
Authors
Kai Hu
Junlan Jin
Chaowen Shen
Min Xia
Liguo Weng
Affiliations
[1] Nanjing University of Information Science and Technology, School of Automation
[2] Nanjing University of Information Science and Technology, Jiangsu Collaborative Innovation Center of Atmospheric Environment and Equipment Technology (CICAEET)
Source
Multimedia Systems | 2023, Vol. 29
Keywords
Skeleton-based action recognition; Graph topology; Position feature
DOI
Not available
Abstract
Graph Convolutional Networks (GCNs) have become the standard paradigm for skeleton-based human action recognition. As a core component of a GCN, the construction of the graph topology often has a significant impact on classification accuracy. Because a fixed physical graph topology cannot capture the non-physical connections of the human body, existing methods capture more flexible node relationships by constructing dynamic graph structures. This paper proposes a novel attentional weighting strategy-based dynamic GCN (AWD-GCN). We construct a new dynamic adjacency matrix that uses an attention weighting mechanism to simultaneously capture the dynamic relationships among the three partitions of the human skeleton across multiple actions, so that discriminative action features are extracted fully. In addition, considering the importance of skeletal node position features for distinguishing actions, we propose a new multi-scale position attention and a multi-level attention. We use a multi-scale modelling method to capture the complex relationships among skeletal node position features, which helps distinguish human actions at different spatial scales. Extensive experiments on two challenging datasets, NTU-RGB+D and Skeleton-Kinetics, demonstrate the effectiveness and superiority of our method.
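Below is a minimal, illustrative PyTorch sketch of the general idea described in the abstract: a graph-convolution layer whose fixed, partition-wise physical adjacency is augmented with data-dependent attention weights between joints, yielding a dynamic adjacency matrix. This is not the authors' AWD-GCN implementation; the class name AttentionWeightedGraphConv, the embedding size, and the tensor layout are hypothetical choices, and the joint/partition counts merely follow the NTU-RGB+D convention.

```python
# Illustrative sketch only: a skeleton GCN layer with an attention-weighted
# dynamic adjacency matrix. Names and hyperparameters are hypothetical.
import torch
import torch.nn as nn


class AttentionWeightedGraphConv(nn.Module):
    """Graph convolution over skeleton joints where a learned, data-dependent
    attention matrix is added to a fixed physical adjacency (one per partition)."""

    def __init__(self, in_channels, out_channels, adjacency, embed_dim=16):
        super().__init__()
        # adjacency: (P, V, V) tensor, one normalized matrix per skeleton partition
        self.register_buffer("A_fixed", adjacency)
        P, V, _ = adjacency.shape
        # 1x1 convolutions produce joint embeddings used to infer attention weights.
        self.theta = nn.Conv2d(in_channels, embed_dim * P, kernel_size=1)
        self.phi = nn.Conv2d(in_channels, embed_dim * P, kernel_size=1)
        self.out = nn.Conv2d(in_channels * P, out_channels, kernel_size=1)
        self.embed_dim, self.P = embed_dim, P

    def forward(self, x):
        # x: (N, C, T, V) batch of joint features over time
        N, C, T, V = x.shape
        q = self.theta(x).mean(dim=2).view(N, self.P, self.embed_dim, V)
        k = self.phi(x).mean(dim=2).view(N, self.P, self.embed_dim, V)
        # Attention weights between every pair of joints, per partition.
        attn = torch.softmax(torch.einsum("npcv,npcw->npvw", q, k), dim=-1)
        # Dynamic adjacency = fixed physical graph + data-dependent attention.
        A_dyn = self.A_fixed.unsqueeze(0) + attn            # (N, P, V, V)
        y = torch.einsum("nctv,npvw->npctw", x, A_dyn)       # propagate features
        y = y.reshape(N, self.P * C, T, V)
        return self.out(y)


if __name__ == "__main__":
    V, P = 25, 3                                 # 25 NTU-RGB+D joints, 3 partitions
    A = torch.rand(P, V, V).softmax(dim=-1)      # stand-in for a normalized adjacency
    layer = AttentionWeightedGraphConv(3, 64, A)
    out = layer(torch.randn(8, 3, 20, V))        # (batch, channels, frames, joints)
    print(out.shape)                             # torch.Size([8, 64, 20, 25])
```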
Pages: 1941–1954
Number of pages: 13
Related papers
Total: 50
  • [1] Attentional weighting strategy-based dynamic GCN for skeleton-based action recognition
    Hu, Kai
    Jin, Junlan
    Shen, Chaowen
    Xia, Min
    Weng, Liguo
    MULTIMEDIA SYSTEMS, 2023, 29 (04) : 1941 - 1954
  • [2] Skeleton-based action recognition with JRR-GCN
    Ye, Fanfan
    Tang, Huiming
    ELECTRONICS LETTERS, 2019, 55 (17) : 933 - 935
  • [3] Fully Attentional Network for Skeleton-Based Action Recognition
    Liu, Caifeng
    Zhou, Hongcheng
    IEEE ACCESS, 2023, 11 : 20478 - 20485
  • [4] A GCN and Transformer complementary network for skeleton-based action recognition
    Xiang, Xuezhi
    Li, Xiaoheng
    Liu, Xuzhao
    Qiao, Yulong
    El Saddik, Abdulmotaleb
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2024, 249
  • [5] HybridNet: Integrating GCN and CNN for skeleton-based action recognition
    Yang, Wenjie
    Zhang, Jianlin
    Cai, Jingju
    Xu, Zhiyong
    APPLIED INTELLIGENCE, 2023, 53 (01) : 574 - 585
  • [6] Skeleton-Based ST-GCN for Human Action Recognition With Extended Skeleton Graph and Partitioning Strategy
    Wang, Quanyu
    Zhang, Kaixiang
    Asghar, Manjotho Ali
    IEEE ACCESS, 2022, 10 : 41403 - 41410
  • [7] Dynamic GCN: Context-enriched Topology Learning for Skeleton-based Action Recognition
    Ye, Fanfan
    Pu, Shiliang
    Zhong, Qiaoyong
    Li, Chao
    Xie, Di
    Tang, Huiming
    MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, 2020, : 55 - 63
  • [8] SparseShift-GCN: High precision skeleton-based action recognition
    Zang, Ying
    Yang, Dongsheng
    Liu, Tianjiao
    Li, Hui
    Zhao, Shuguang
    Liu, Qingshan
    PATTERN RECOGNITION LETTERS, 2022, 153 : 136 - 143
  • [9] A spatial attentive and temporal dilated (SATD) GCN for skeleton-based action recognition
    Zhang, Jiaxu
    Ye, Gaoxiang
    Tu, Zhigang
    Qin, Yongtao
    Qin, Qianqing
    Zhang, Jinlu
    Liu, Jun
    CAAI TRANSACTIONS ON INTELLIGENCE TECHNOLOGY, 2022, 7 (01) : 46 - 55