Hybrid features for skeleton-based action recognition based on network fusion

Cited by: 4
Authors
Chen, Zhangmeng [1 ,2 ]
Pan, Junjun [1 ,2 ]
Yang, Xiaosong [3 ]
Qin, Hong [4 ]
Affiliations
[1] Beihang Univ, State Key Lab Virtual Real Technol & Syst, Beijing, Peoples R China
[2] Peng Cheng Lab, Shenzhen, Peoples R China
[3] Bournemouth Univ, Fac Media & Commun, Poole, Dorset, England
[4] SUNY Stony Brook, Dept Comp Sci, Stony Brook, NY 11794 USA
Funding
National Natural Science Foundation of China; Beijing Natural Science Foundation; US National Science Foundation; National Key Research and Development Program of China
Keywords
action recognition; CNN; human skeleton; hybrid features; LSTM; multistream neural network;
DOI
10.1002/cav.1952
Chinese Library Classification (CLC)
TP31 [Computer Software]
Discipline code
081202; 0835
Abstract
In recent years, skeleton-based human action recognition has attracted significant attention from researchers and practitioners in graphics, vision, animation, and virtual environments. The most fundamental issue is how to learn an effective and accurate representation from spatiotemporal action sequences, and this article aims to address that challenge. In particular, we design a novel hybrid feature extraction method based on the construction of multistream networks and their organic fusion. First, we train a convolutional neural network (CNN) model to learn CNN-based features, with the raw skeleton coordinates and their temporal differences serving as input signals. An attention mechanism is injected into the CNN model to weight the more informative parts of the input. Then, we employ a long short-term memory (LSTM) network to obtain long-term temporal features from the action sequences. Finally, we generate hybrid features by fusing the CNN and LSTM streams, and we classify action types with these hybrid features. Extensive experiments are performed on several large-scale, publicly available databases, and the promising results demonstrate the efficacy and effectiveness of our proposed framework.
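The abstract describes a two-stream design: a CNN stream fed with raw joint coordinates plus their frame-to-frame differences (gated by attention), an LSTM stream over the frame sequence, and late fusion of both feature vectors for classification. Below is a minimal PyTorch sketch of that idea, assuming NTU-style dimensions (25 joints, 60 classes); the module names (HybridNet, ChannelAttention), layer sizes, and the squeeze-and-excitation-style attention are illustrative assumptions, not the authors' published architecture.

```python
# Sketch of the two-stream fusion idea from the abstract. All names and
# sizes are illustrative assumptions, not the authors' exact network.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style gate that reweights CNN channels."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                      # x: (N, C, T, J)
        w = self.fc(x.mean(dim=(2, 3)))        # global average pool -> (N, C)
        return x * w[:, :, None, None]         # rescale each channel

class HybridNet(nn.Module):
    def __init__(self, num_joints=25, num_classes=60, hidden=128):
        super().__init__()
        # CNN stream: (x, y, z) coordinates plus their temporal
        # differences give 6 input channels.
        self.cnn = nn.Sequential(
            nn.Conv2d(6, 64, 3, padding=1), nn.ReLU(),
            ChannelAttention(64),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        # LSTM stream: each frame is a flattened joint-coordinate vector.
        self.lstm = nn.LSTM(num_joints * 3, hidden, batch_first=True)
        self.classifier = nn.Linear(128 + hidden, num_classes)

    def forward(self, skel):                   # skel: (N, T, J, 3)
        diff = skel.diff(dim=1, prepend=skel[:, :1])              # temporal differences
        cnn_in = torch.cat([skel, diff], dim=-1).permute(0, 3, 1, 2)  # (N, 6, T, J)
        f_cnn = self.cnn(cnn_in).flatten(1)                       # (N, 128)
        _, (h, _) = self.lstm(skel.flatten(2))                    # last hidden state
        return self.classifier(torch.cat([f_cnn, h[-1]], dim=1))  # fused features

# Example: a batch of 4 sequences, 32 frames, 25 joints.
logits = HybridNet()(torch.randn(4, 32, 25, 3))
print(logits.shape)  # torch.Size([4, 60])
```

Concatenation of the two stream outputs before a single linear classifier is the simplest fusion choice; weighted summation of per-stream logits would be an equally plausible reading of "network fusion" here.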
Pages: 11