A lightweight graph convolutional network for skeleton-based action recognition

Cited by: 3
Authors
Pham, Dinh-Tan [1 ,3 ]
Pham, Quang-Tien [2 ]
Nguyen, Tien-Thanh [2 ]
Le, Thi-Lan [2 ,3 ]
Vu, Hai [2 ,3 ]
Affiliations
[1] Hanoi Univ Min & Geol, Fac IT, Hanoi, Vietnam
[2] Hanoi Univ Sci & Technol, Sch Elect & Elect Engn SEEE, Hanoi, Vietnam
[3] Hanoi Univ Sci & Technol, MICA Int Res Inst, Comp Vis Dept, Hanoi, Vietnam
Keywords
Human action recognition; Graph convolution network; Skeleton data; Informative joint selection
DOI
10.1007/s11042-022-13298-w
CLC number: TP [Automation Technology, Computer Technology]
Discipline code: 0812
Abstract
Human action recognition has attracted considerable research interest in recent years due to its wide range of applications. Among existing methods, Graph Convolutional Networks (GCNs) achieve remarkable results by exploiting the graph structure of skeleton data in both the spatial and temporal domains. However, noise from pose estimation errors is an inherent issue that can seriously degrade recognition performance. Moreover, existing graph-based methods focus mainly on improving recognition accuracy, whereas low-complexity models are required for applications on devices with limited computational capacity. In this paper, a lightweight model is proposed by pruning layers and adding Feature Fusion and Preset Joint Subset Selection modules. The proposed model combines the strengths of recent GCNs with the selection of informative joints, and two graph topologies are defined over the selected joints. Extensive experiments on public datasets show that the method outperforms the baselines on datasets with severe noise in the skeleton data, while using 5.6 times fewer parameters than the baseline. The proposed lightweight models therefore offer feasible solutions for developing practical applications.
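This record does not include the authors' implementation, so the following is only a minimal sketch of the two ideas the abstract names: keeping a preset subset of informative joints and running a spatial graph convolution over the reduced skeleton graph. The joint indices in JOINT_SUBSET, the bone list in edges, and the layer sizes are illustrative assumptions, not values from the paper, and the layer uses the standard symmetrically normalized adjacency formulation rather than the paper's exact block.

```python
# Minimal sketch (not the authors' code) of joint-subset selection followed
# by a spatial graph convolution on the reduced skeleton graph.
# JOINT_SUBSET, `edges`, and all layer sizes are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn

# Hypothetical subset of "informative" joints from a 25-joint skeleton
# (e.g., NTU RGB+D-style indexing); the paper presets such a subset.
JOINT_SUBSET = [0, 1, 2, 3, 4, 5, 8, 9, 12, 16, 20]

def normalized_adjacency(edges, num_joints):
    """Build the symmetrically normalized adjacency D^-1/2 (A + I) D^-1/2."""
    A = np.eye(num_joints, dtype=np.float32)  # self-loops
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    d_inv_sqrt = np.diag(A.sum(axis=1) ** -0.5)
    return torch.from_numpy(d_inv_sqrt @ A @ d_inv_sqrt)

class SpatialGraphConv(nn.Module):
    """One GCN layer, X' = ReLU(A_hat X W), applied independently per frame."""
    def __init__(self, in_ch, out_ch, A_hat):
        super().__init__()
        self.register_buffer("A_hat", A_hat)
        self.lin = nn.Linear(in_ch, out_ch)

    def forward(self, x):  # x: (batch, frames, joints, channels)
        x = torch.einsum("vu,btuc->btvc", self.A_hat, x)  # mix neighbor joints
        return torch.relu(self.lin(x))

# Toy bone list over the reduced joint set (assumed topology, not the paper's).
edges = [(0, 1), (1, 2), (2, 3), (1, 4), (4, 5), (1, 6),
         (6, 7), (0, 8), (0, 9), (9, 10)]
A_hat = normalized_adjacency(edges, num_joints=len(JOINT_SUBSET))

x = torch.randn(2, 64, 25, 3)      # (batch, frames, joints, xyz coordinates)
x = x[:, :, JOINT_SUBSET, :]       # Preset Joint Subset Selection
layer = SpatialGraphConv(3, 16, A_hat)
print(layer(x).shape)              # torch.Size([2, 64, 11, 16])
```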
Pages: 3055-3079
Page count: 25
Related papers (50 in total; items 41-50 shown)
  • [41] Chen, Zhenjie; Wang, Hongsong; Gui, Jie. Occluded Skeleton-Based Human Action Recognition with Dual Inhibition Training. Proceedings of the 31st ACM International Conference on Multimedia (MM 2023), 2023: 2625-2634.
  • [42] Nan, Mihai; Trascau, Mihai; Florea, Adina-Magda. Spatio-temporal neural network with handcrafted features for skeleton-based action recognition. Neural Computing & Applications, 2024: 9221-9243.
  • [43] Wang, Ziyi; Chen, Yihong; Zheng, Hao; Liu, Meng; Huang, Ping. Body RFID Skeleton-Based Human Activity Recognition Using Graph Convolution Neural Network. IEEE Transactions on Mobile Computing, 2024, 23(6): 7301-7317.
  • [44] Li, Qi; Deng, Yao-hui; Wang, Jiao. Campus violence action recognition based on lightweight graph convolution network. Chinese Journal of Liquid Crystals and Displays, 2022, 37(4): 530-538.
  • [45] Su, Benyue; Zhang, Peng; Sun, Manzhen; Sheng, Min. Direction-guided two-stream convolutional neural networks for skeleton-based action recognition. Soft Computing, 2023, 27(16): 11833-11842.
  • [47] Xin, Wentian; Liu, Ruyi; Liu, Yi; Chen, Yu; Yu, Wenxin; Miao, Qiguang. Transformer for skeleton-based action recognition: A review of recent advances. Neurocomputing, 2023, 537: 164-186.
  • [48] Li, Chuankun; Li, Shuai; Gao, Yanbo; Zhou, Lijuan; Li, Wanqing. Static graph convolution with learned temporal and channel-wise graph topology generation for skeleton-based action recognition. Computer Vision and Image Understanding, 2024, 244.
  • [49] Tian, Haitao; Payeur, Pierre. Unsupervised Temporal Adaptation in Skeleton-Based Human Action Recognition. Algorithms, 2024, 17(12).
  • [50] Pham, Dinh-Tan. Deep Learning Techniques for Skeleton-Based Action Recognition: A Survey. Computational Science and Its Applications - ICCSA 2024, Part II, 2024, 14814: 427-435.