Research on Human Upper Limb Action Recognition Method Based on Multimodal Heterogeneous Spatial Temporal Graph Network

Cited by: 0
Authors
Ci, Zelin [1 ]
Ren, Huizhao [1 ]
Liu, Jinming [1 ]
Xie, Songyun [2 ]
Wang, Wendong [1 ,3 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Mech Engn, Xian 710072, Peoples R China
[2] Northwestern Polytech Univ, Elect Informat Coll, Xian 710072, Peoples R China
[3] Sanhang Civil Mil Integrat Innovat Res Inst, Dongguan 523429, Peoples R China
Source
INTELLIGENT ROBOTICS AND APPLICATIONS, ICIRA 2024, PT X | 2025 / Vol. 15210
Keywords
Action Recognition; Graph Neural Network; Temporal Features; Heterogeneous Graph;
DOI
10.1007/978-981-96-0786-0_23
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Graph convolutional neural networks are increasingly used in human action recognition because of their strong ability to model spatial topological relations. Like whole-body actions, upper limb actions contain both spatial and temporal features; however, graph convolutional networks extract temporal features poorly and thus fail to couple the spatial and temporal domains effectively. This paper proposes a multimodal heterogeneous spatial-temporal graph convolutional network (MHST-GCN) model based on multimodal information. First, the model introduces a temporal graph built on a hybrid sparsity strategy, which captures both local and global temporal features in upper limb action sequences while preserving computational efficiency. Second, a heterogeneous graph model is proposed to fuse the two modalities and enhance the robustness of the model. Finally, extensive experiments are conducted on two datasets: the standard NTU-RGB+D benchmark and a self-built upper limb action dataset. The experimental results demonstrate the effectiveness of the proposed method.
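To make the spatial-temporal decomposition in the abstract concrete, the sketch below runs one spatial graph-convolution step over a toy skeleton graph followed by a temporal smoothing step over frames. The joint graph, feature sizes, and averaging weights are illustrative assumptions for exposition only; they are not the paper's MHST-GCN layers, hybrid sparsity strategy, or heterogeneous multimodal fusion.

```python
# Minimal sketch of one spatial-temporal graph-convolution step on a
# skeleton sequence (pure Python, no learned weights).

def spatial_graph_conv(frames, adj):
    """Average each joint's features with its one-hop graph neighbours."""
    out = []
    for joints in frames:                      # one frame = list of joint feature vectors
        dim = len(joints[0])
        new_joints = []
        for i in range(len(joints)):
            nbrs = [j for j, a in enumerate(adj[i]) if a] + [i]   # neighbours + self-loop
            agg = [sum(joints[n][d] for n in nbrs) / len(nbrs) for d in range(dim)]
            new_joints.append(agg)
        out.append(new_joints)
    return out

def temporal_conv(frames, kernel=3):
    """Average each joint's features over a sliding window of frames."""
    half = kernel // 2
    dim = len(frames[0][0])
    out = []
    for t in range(len(frames)):
        window = frames[max(0, t - half): t + half + 1]
        out.append([[sum(f[j][d] for f in window) / len(window)
                     for d in range(dim)]
                    for j in range(len(frames[0]))])
    return out

# Toy upper-limb chain: shoulder - elbow - wrist (3 joints, 2-D features, 4 frames).
adj = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 0]]
seq = [[[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]] for _ in range(4)]
features = temporal_conv(spatial_graph_conv(seq, adj))
print(len(features), len(features[0]))  # 4 frames, 3 joints
```

In a trainable model, the neighbourhood averaging would be replaced by a normalized-adjacency multiplication with learned weight matrices, and the temporal averaging by learned temporal convolutions; the point here is only the order of operations: aggregate over the joint graph within each frame, then aggregate over time per joint.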
Pages: 304 - 318 (15 pages)
Related Papers
50 records in total
  • [11] Human action recognition based on enhanced data guidance and key node spatial temporal graph convolution
    Zhang, Chengyu
    Liang, Jiuzhen
    Li, Xing
    Xia, Yunfei
    Di, Lan
    Hou, Zhenjie
    Huan, Zhan
    MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (06) : 8349 - 8366
  • [12] Using BlazePose on Spatial Temporal Graph Convolutional Networks for Action Recognition
    Alsawadi, Motasem S.
    El-Kenawy, El-Sayed M.
    Rio, Miguel
    CMC-COMPUTERS MATERIALS & CONTINUA, 2023, 74 (01): : 19 - 36
  • [14] Skeleton-based action recognition through attention guided heterogeneous graph neural network
    Li, Tianchen
    Geng, Pei
    Lu, Xuequan
    Li, Wanqing
    Lyu, Lei
    KNOWLEDGE-BASED SYSTEMS, 2025, 309
  • [15] STGNN-LMR: A Spatial–Temporal Graph Neural Network Approach Based on sEMG Lower Limb Motion Recognition
    Mao, Weifan
    Ma, Bin
    Li, Zhao
    Zhang, Jianxing
    Lu, Yizhou
    Yu, Zhuting
    Zhang, Feng
    JOURNAL OF BIONIC ENGINEERING, 2024, 21 : 256 - 269
  • [16] Gait Recognition Algorithm based on Spatial-temporal Graph Neural Network
    Lan, TianYi
    Shi, ZongBin
    Wang, KeJun
    Yin, ChaoQun
    2022 INTERNATIONAL CONFERENCE ON BIG DATA, INFORMATION AND COMPUTER NETWORK (BDICN 2022), 2022, : 55 - 58
  • [17] An Attention Enhanced Spatial-Temporal Graph Convolutional LSTM Network for Action Recognition in Karate
    Guo, Jianping
    Liu, Hong
    Li, Xi
    Xu, Dahong
    Zhang, Yihan
    APPLIED SCIENCES-BASEL, 2021, 11 (18):
  • [18] Spatial adaptive graph convolutional network for skeleton-based action recognition
    Zhu, Qilin
    Deng, Hongmin
    APPLIED INTELLIGENCE, 2023, 53 (14) : 17796 - 17808
  • [19] Spatial-Temporal Pyramid Graph Reasoning for Action Recognition
    Geng, Tiantian
    Zheng, Feng
    Hou, Xiaorong
    Lu, Ke
    Qi, Guo-Jun
    Shao, Ling
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2022, 31 : 5484 - 5497