H2GCN: A hybrid hypergraph convolution network for skeleton-based action recognition

Cited: 1
Authors
Shao, Yiming [1 ]
Mao, Lintao [1 ]
Ye, Leixiong [1 ]
Li, Jincheng [1 ]
Yang, Ping [1 ]
Ji, Chengtao [2 ]
Wu, Zizhao [3 ]
Affiliations
[1] Hangzhou Dianzi Univ, Hangzhou, Peoples R China
[2] Xian Jiaotong Liverpool Univ, Suzhou, Peoples R China
[3] Room 322, Build 10, Hangzhou 310018, Zhejiang, Peoples R China
Keywords
Action recognition; Hypergraph convolution network; Spatio-temporal modeling;
DOI
10.1016/j.jksuci.2024.102072
CLC Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Recent GCN-based works have achieved remarkable results for skeleton-based human action recognition. Nevertheless, while existing approaches extensively investigate pairwise joint relationships, only a limited number of models explore the intricate, high-order relationships among multiple joints. In this paper, we propose a novel hypergraph convolution method that represents the relationships among multiple joints with hyperedges, and dynamically refines the high-order relationships between hyperedges in the spatial, temporal, and channel dimensions. Specifically, our method initiates with a temporal-channel refinement hypergraph convolutional network, dynamically learning temporal and channel topologies in a data-dependent manner, which facilitates the capture of non-physical structural information inherent in the human body. Furthermore, to model various inter-joint relationships across spatio-temporal dimensions, we propose a spatio-temporal hypergraph joint module, which aims to encapsulate the dynamic spatial-temporal characteristics of the human body. Through the integration of these modules, our proposed model achieves state-of-the-art performance on the NTU RGB+D 60 and NTU RGB+D 120 datasets.
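The abstract's core idea is replacing pairwise GCN edges with hyperedges that connect groups of joints. The record does not give the paper's exact layer equations, so the following is only a minimal sketch of a standard HGNN-style hypergraph convolution, X' = D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2} X Θ, applied to a toy skeleton; the joint groupings and all variable names here are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def hypergraph_conv(X, H, Theta):
    """One HGNN-style hypergraph convolution layer (illustrative, not the
    paper's exact method):  X' = Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2} X Theta
    X: (N, C) joint features; H: (N, E) incidence matrix (joint i in
    hyperedge e -> H[i, e] = 1); Theta: (C, C_out) learnable weights.
    """
    W = np.eye(H.shape[1])                       # hyperedge weights (identity here)
    Dv = np.diag(H @ W @ np.ones(H.shape[1]))    # node (joint) degrees
    De = np.diag(H.sum(axis=0))                  # hyperedge degrees
    Dv_inv_sqrt = np.linalg.inv(np.sqrt(Dv))
    # Normalized hypergraph adjacency: aggregates each joint's features over
    # every hyperedge it belongs to, then redistributes to member joints.
    A = Dv_inv_sqrt @ H @ W @ np.linalg.inv(De) @ H.T @ Dv_inv_sqrt
    return A @ X @ Theta

# Toy skeleton: 5 joints, 2 hyperedges (hypothetical "arm" groups sharing joint 2).
H = np.array([[1, 0], [1, 0], [1, 1], [0, 1], [0, 1]], dtype=float)
X = np.random.randn(5, 3)        # 3 input channels per joint
Theta = np.random.randn(3, 4)    # project to 4 output channels
out = hypergraph_conv(X, H, Theta)
assert out.shape == (5, 4)
```

Because a hyperedge aggregates all of its member joints at once, a single layer already mixes information among non-adjacent joints (e.g. both hands in one "arms" hyperedge), which is the high-order relationship the abstract contrasts with pairwise GCN edges.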
Pages: 9
Related Papers
50 records
  • [31] DD-GCN: Directed Diffusion Graph Convolutional Network for Skeleton-based Human Action Recognition
    Li, Chang
    Huang, Qian
    Mao, Yingchi
    2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023, : 786 - 791
  • [32] Adaptive Multi-Scale Difference Graph Convolution Network for Skeleton-Based Action Recognition
    Wang, Xiaojuan
    Gan, Ziliang
    Jin, Lei
    Xiao, Yabo
    He, Mingshu
    ELECTRONICS, 2023, 12 (13)
  • [33] Temporal channel reconfiguration multi-graph convolution network for skeleton-based action recognition
    Lei, Siyue
    Tang, Bin
    Chen, Yanhua
    Zhao, Mingfu
    Xu, Yifei
    Long, Zourong
    IET COMPUTER VISION, 2024, 18 (06) : 813 - 825
  • [34] Fully Attentional Network for Skeleton-Based Action Recognition
    Liu, Caifeng
    Zhou, Hongcheng
    IEEE ACCESS, 2023, 11 : 20478 - 20485
  • [35] Auto-Learning-GCN: An Ingenious Framework for Skeleton-Based Action Recognition
    Xin, Wentian
    Liu, Yi
    Liu, Ruyi
    Miao, Qiguang
    Shi, Cheng
    Pun, Chi-Man
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT I, 2024, 14425 : 29 - 42
  • [36] Skeleton-based Action Recognition with Graph Involution Network
    Tang, Zhihao
    Xia, Hailun
    Gao, Xinkai
    Gao, Feng
    Feng, Chunyan
    2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022, : 3348 - 3354
  • [37] Convolutional relation network for skeleton-based action recognition
    Zhu, Jiagang
    Zou, Wei
    Zhu, Zheng
    Hu, Yiming
    NEUROCOMPUTING, 2019, 370 : 109 - 117
  • [38] A Spatiotemporal Fusion Network For Skeleton-Based Action Recognition
    Bao, Wenxia
    Wang, Junyi
    Yang, Xianjun
    Chen, Hemu
    2024 3RD INTERNATIONAL CONFERENCE ON IMAGE PROCESSING AND MEDIA COMPUTING, ICIPMC 2024, 2024, : 347 - 352
  • [39] Dynamic Semantic-Based Spatial Graph Convolution Network for Skeleton-Based Human Action Recognition
    Xie, Jianyang
    Meng, Yanda
    Zhao, Yitian
    Anh Nguyen
    Yang, Xiaoyun
    Zheng, Yalin
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 6, 2024, : 6225 - 6233
  • [40] Attentional weighting strategy-based dynamic GCN for skeleton-based action recognition
    Hu, Kai
    Jin, Junlan
    Shen, Chaowen
    Xia, Min
    Weng, Liguo
    MULTIMEDIA SYSTEMS, 2023, 29 (04) : 1941 - 1954