AGMS-GCN: Attention-guided multi-scale graph convolutional networks for skeleton-based action recognition

Times Cited: 6
Authors
Kilic, Ugur [1 ,2 ]
Karadag, Ozge Oztimur [3 ]
Ozyer, Gulsah Tumuklu [2 ]
Affiliations
[1] Erzurum Tech Univ, Dept Comp Engn, Erzurum, Turkiye
[2] Ataturk Univ, Dept Comp Engn, Erzurum, Turkiye
[3] Alanya Alaaddin Keykubat Univ, Dept Comp Engn, Antalya, Turkiye
Keywords
Action recognition; Skeletal data; Graph convolutional networks; Attention mechanism; Multi-scale;
DOI
10.1016/j.knosys.2025.113045
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Graph Convolutional Networks can model non-Euclidean data with high effectiveness and consequently perform well on standard benchmarks for skeleton-based action recognition (SBAR). In particular, spatial-temporal graph convolutional networks (ST-GCNs) effectively learn spatial-temporal relationships over skeletal graph patterns. ST-GCN models use a fixed skeletal graph across all layers and obtain spatial-temporal features by performing standard convolution on this fixed topology within a local neighborhood limited by the convolution kernel size. Such a kernel can only model dependencies between nearby joints and short-range temporal dependencies; it fails to capture long-range temporal information and long-distance joint dependencies. Capturing these dependencies effectively is key to improving the performance of ST-GCN models. In this study, we propose AGMS-GCN, an attention-guided multi-scale graph convolutional network that dynamically determines the weights of the dependencies between joints. In the proposed AGMS-GCN architecture, new adjacency matrices representing action-specific joint relationships are generated by applying an attention mechanism to the spatial-temporal dependencies in the feature maps extracted by spatial-temporal graph convolutions. This enables the extraction of features that account for both short- and long-range spatial-temporal relationships between action-specific joints. This data-driven graph construction provides a more robust graph representation for capturing subtle differences between actions. In addition, actions arise from the coordinated movement of multiple body joints, yet most existing SBAR approaches overlook this coordination, considering the skeletal graph from a single-scale perspective. Consequently, these methods miss the high-level contextual features necessary for distinguishing actions. The AGMS-GCN architecture addresses this shortcoming with its multi-scale structure. Comprehensive experiments demonstrate that our proposed method attains state-of-the-art (SOTA) performance on the NTU RGB+D 60 and Northwestern-UCLA datasets, and competitive performance on the NTU RGB+D 120 dataset. The source code of the proposed AGMS-GCN model is available at: https://github.com/ugrkilc/AGMS-GCN.
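The attention-guided graph construction described in the abstract can be illustrated with a minimal numpy sketch, assuming a standard scaled dot-product attention formulation; the function names, projection matrices, and dimensions below are hypothetical and not taken from the paper's code. Per-joint feature maps are projected into query/key spaces, and a row-wise softmax over their dot products yields a data-driven adjacency matrix that can replace the fixed skeletal topology in a graph-convolution step.

```python
import numpy as np

def attention_adjacency(X, Wq, Wk):
    """Build a data-driven adjacency matrix from joint features.

    X  : (V, C) feature map, one row per skeleton joint
    Wq : (C, d) query projection; Wk : (C, d) key projection
    Returns a (V, V) row-stochastic adjacency (attention weights).
    """
    Q, K = X @ Wq, X @ Wk
    scores = Q @ K.T / np.sqrt(K.shape[1])       # scaled dot-product attention
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    A = np.exp(scores)
    return A / A.sum(axis=1, keepdims=True)      # softmax over each joint's neighbors

def graph_conv(X, A, W):
    """One graph-convolution step: aggregate neighbors via A, then project and ReLU."""
    return np.maximum(A @ X @ W, 0.0)

rng = np.random.default_rng(0)
V, C, d = 25, 16, 8                              # 25 joints, as in NTU RGB+D skeletons
X = rng.standard_normal((V, C))
A = attention_adjacency(X, rng.standard_normal((C, d)), rng.standard_normal((C, d)))
out = graph_conv(X, A, rng.standard_normal((C, C)))
print(A.shape, out.shape)                        # (25, 25) (25, 16)
```

Because the adjacency is computed from the input features rather than fixed, distant joints that co-vary within an action can receive large edge weights, which is the intuition behind the paper's action-specific, long-distance joint dependencies.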
Pages: 12