Attentional weighting strategy-based dynamic GCN for skeleton-based action recognition

Cited: 0
Authors
Kai Hu
Junlan Jin
Chaowen Shen
Min Xia
Liguo Weng
Affiliations
[1] Nanjing University of Information Science and Technology,School of Automation
[2] Nanjing University of Information Science and Technology,Jiangsu Collaborative Innovation Center of Atmospheric Environment and Equipment Technology (CICAEET)
Source
Multimedia Systems | 2023 / Vol. 29
Keywords
Skeleton-based action recognition; Graph topology; Position feature
DOI
Not available
Abstract
Graph Convolutional Networks (GCNs) have become the standard research paradigm for skeleton-based human action recognition. As a core component of graph convolutional networks, the construction of the graph topology often significantly impacts classification accuracy. Because a fixed physical graph topology cannot capture the non-physical connection relationships of the human body, existing methods capture more flexible node relationships by constructing dynamic graph structures. This paper proposes a novel attentional weighting strategy-based dynamic GCN (AWD-GCN). We construct a new dynamic adjacency matrix that uses an attention weighting mechanism to simultaneously capture the dynamic relationships among the three partitions of the human skeleton across multiple actions, so as to fully extract discriminative action features. In addition, considering the importance of skeletal node position features for distinguishing actions, we propose new multi-scale position attention and multi-level attention. We use a multi-scale modelling method to capture the complex relationships between skeletal node position features, which helps distinguish human actions at different spatial scales. Extensive experiments on two challenging datasets, NTU-RGB+D and Skeleton-Kinetics, demonstrate the effectiveness and superiority of our method.
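The abstract's central idea, augmenting a fixed skeleton adjacency with a data-dependent attention matrix before graph convolution, can be sketched roughly as follows. This is a minimal NumPy illustration under assumed shapes and names (`dynamic_gcn_layer`, `W_q`, `W_k`, `W_out`, `alpha` are all hypothetical), not the authors' AWD-GCN implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dynamic_gcn_layer(X, A_phys, W_q, W_k, W_out, alpha=0.5):
    """One graph-conv layer with an attention-derived dynamic adjacency.

    X:      (N, C) node (joint) features
    A_phys: (N, N) fixed adjacency from the physical skeleton
    The learned softmax(Q K^T) term models non-physical joint relations
    that the fixed topology cannot express.
    """
    Q = X @ W_q                                      # (N, d) queries
    K = X @ W_k                                      # (N, d) keys
    A_dyn = softmax(Q @ K.T / np.sqrt(Q.shape[1]))   # data-dependent adjacency
    A = A_phys + alpha * A_dyn                       # fixed + dynamic topology
    return np.maximum(A @ X @ W_out, 0.0)            # ReLU(A X W)
```

Because `A_dyn` is recomputed from the input features, the effective graph topology changes per sample, which is the property the paper's dynamic adjacency matrix is designed to exploit.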
Pages: 1941-1954
Page count: 13
Related papers
50 records in total
  • [31] Insight on Attention Modules for Skeleton-Based Action Recognition
    Jiang, Quanyan
    Wu, Xiaojun
    Kittler, Josef
    PATTERN RECOGNITION AND COMPUTER VISION, PT I, 2021, 13019 : 242 - 255
  • [32] Research Progress in Skeleton-Based Human Action Recognition
    Liu B.
    Zhou S.
    Dong J.
    Xie M.
    Zhou S.
    Zheng T.
    Zhang S.
    Ye X.
    Wang X.
    Journal of Computer-Aided Design and Computer Graphics (Jisuanji Fuzhu Sheji Yu Tuxingxue Xuebao), 2023, 35 (09): 1299 - 1322
  • [33] Temporal Extension Module for Skeleton-Based Action Recognition
    Obinata, Yuya
    Yamamoto, Takuma
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 534 - 540
  • [34] Adversarial Attack on Skeleton-Based Human Action Recognition
    Liu, Jian
    Akhtar, Naveed
    Mian, Ajmal
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (04) : 1609 - 1622
  • [35] Skeleton-based action recognition with extreme learning machines
    Chen, Xi
    Koskela, Markus
    NEUROCOMPUTING, 2015, 149 : 387 - 396
  • [36] Profile HMMs for skeleton-based human action recognition
    Ding, Wenwen
    Liu, Kai
    Fu, Xujia
    Cheng, Fei
    SIGNAL PROCESSING-IMAGE COMMUNICATION, 2016, 42 : 109 - 119
  • [37] A Novel Skeleton Spatial Pyramid Model for Skeleton-based Action Recognition
    Li, Yanshan
    Guo, Tianyu
    Xia, Rongjie
    Liu, Xing
    2019 IEEE 4TH INTERNATIONAL CONFERENCE ON SIGNAL AND IMAGE PROCESSING (ICSIP 2019), 2019, : 16 - 20
  • [38] Skeleton-based Action Recognition with Graph Involution Network
    Tang, Zhihao
    Xia, Hailun
    Gao, Xinkai
    Gao, Feng
    Feng, Chunyan
    2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022, : 3348 - 3354
  • [39] Bootstrapped Representation Learning for Skeleton-Based Action Recognition
    Moliner, Olivier
    Huang, Sangxia
    Astrom, Kalle
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2022, 2022, : 4153 - 4163
  • [40] Convolutional relation network for skeleton-based action recognition
    Zhu, Jiagang
    Zou, Wei
    Zhu, Zheng
    Hu, Yiming
    NEUROCOMPUTING, 2019, 370 : 109 - 117