A Data Augmentation Method for Skeleton-Based Action Recognition with Relative Features

Citations: 7
Authors
Chen, Junjie [1 ]
Yang, Wei [2 ]
Liu, Chenqi [3 ]
Yao, Leiyue [1 ]
Affiliations
[1] Nanchang Univ, Sch Informat Engn, Nanchang 330031, Jiangxi, Peoples R China
[2] Jiangxi Univ Technol, Ctr Collaborat & Innovat, Nanchang 330031, Jiangxi, Peoples R China
[3] Nanchang Univ, Network Informat Ctr, Nanchang 330031, Jiangxi, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2021, Vol. 11, No. 23
Funding
National Natural Science Foundation of China;
Keywords
skeleton-based human action recognition; relative coordinate; data augmentation; motion image;
DOI
10.3390/app112311481
Chinese Library Classification
O6 [Chemistry];
Discipline Code
0703;
Abstract
In recent years, skeleton-based human action recognition (HAR) approaches using convolutional neural network (CNN) models have made tremendous progress in computer vision applications. However, depicting human actions with relative features, and preventing overfitting when a CNN model is trained on only a few samples, remain challenging. In this paper, a new motion image is introduced to transform spatial-temporal motion information into an image-based representation. For each skeleton sequence, three relative features are extracted to describe human actions: relative coordinates, immediate displacement, and immediate motion orientation. In particular, the relative coordinates introduced in this paper not only depict the spatial relations of human skeleton joints but also provide long-term temporal information. To address the problem of small sample sizes, a data augmentation strategy consisting of three simple but effective methods is proposed to expand the training set. Because the generated color images are small, a shallow CNN model is sufficient to extract deep features from the generated motion images. Two small-scale but challenging skeleton datasets were used to evaluate the method, which scored 96.59% on the Florence 3D Actions dataset and 97.48% on the UTKinect-Action3D dataset. The results show that the proposed method achieves performance competitive with state-of-the-art methods. Furthermore, the augmentation strategy proposed in this paper effectively alleviates overfitting and can be widely adopted in skeleton-based action recognition.
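The abstract describes encoding each skeleton sequence as a color motion image built from three relative features. The Python/NumPy sketch below shows one plausible way such an encoding could be computed; it is an illustration under stated assumptions, not the authors' implementation. In particular, taking relative coordinates with respect to a reference joint in the first frame, and laying the image out as frames by flattened joint coordinates with one feature per color channel, are assumptions of this sketch.

import numpy as np

def skeleton_to_motion_image(seq, ref_joint=0, eps=1e-6):
    """Encode a skeleton sequence (T frames, J joints, 3D coords) as a color
    'motion image' built from three relative features.

    Sketch only: the exact feature definitions follow the paper. Here,
    relative coordinates are taken w.r.t. a reference joint of frame 0
    (assumption); displacement and orientation are frame-to-frame
    differences and their unit vectors.
    """
    seq = np.asarray(seq, dtype=np.float32)          # shape (T, J, 3)
    T, J, _ = seq.shape

    # 1) Relative coordinates: joints relative to the reference joint of frame 0
    rel_coords = seq - seq[0, ref_joint]

    # 2) Immediate displacement: per-joint difference between consecutive frames
    disp = np.diff(seq, axis=0, prepend=seq[:1])

    # 3) Immediate motion orientation: unit vectors of the displacement
    orient = disp / (np.linalg.norm(disp, axis=-1, keepdims=True) + eps)

    def to_channel(x):
        # Min-max normalize a feature to [0, 255] so it can be stored as pixels
        x = (x - x.min()) / (x.max() - x.min() + eps)
        return (x * 255).astype(np.uint8).reshape(T, J * 3)

    # Stack the three features as the R, G, B planes of a (T, 3J, 3) image
    return np.stack([to_channel(rel_coords),
                     to_channel(disp),
                     to_channel(orient)], axis=-1)

# Example: a 40-frame sequence with 20 joints yields a 40 x 60 color image
img = skeleton_to_motion_image(np.random.rand(40, 20, 3))
print(img.shape, img.dtype)   # (40, 60, 3) uint8

Because the resulting images are small (frame count by three times the joint count), a shallow CNN is a natural fit, which is consistent with the shallow model the abstract reports using.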
Pages: 16