A skeleton-based method and benchmark for real-time action classification of functional movement screen

Cited by: 2
Authors
Wang, Wenbo [1]
Wang, Chongwen [1]
Affiliations
[1] Beijing Inst Technol, Beijing, Peoples R China
Funding
National Key R&D Program of China;
Keywords
FMS; Functional movement screen; Action classification; Pose estimation; FMS action evaluation; FUNDAMENTAL MOVEMENTS;
DOI
10.1016/j.compeleceng.2022.108151
Chinese Library Classification
TP3 [Computing technology, computer technology];
Discipline classification code
0812;
Abstract
FMS (functional movement screen) is a simple and effective method for evaluating athletes' basic sports ability. This paper proposes a real-time FMS action classification method. Correspondingly, a video set was constructed from two different perspectives, covering 8 testers, 13 independent testing processes, and 360,574 images. Moreover, a normalization algorithm and a result correction algorithm are proposed to improve the performance of the models and make the result sequence more continuous, which is of vital significance for FMS action evaluation. Finally, this paper analyzes the effectiveness of different models and compares their performance in terms of accuracy, continuity, running speed, etc. The experimental results show that our method achieves a 96.7% precision score and a 94.7% recall score on the test set, and the average running speed of the system exceeds 30 FPS. All related data, benchmarks, and code will be uploaded to https://github.com/bobogo/FMS-evaluation-system.
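This record does not reproduce the paper's actual algorithms, so the following Python sketch is only illustrative of the two steps the abstract names: it assumes the normalization is a root-joint translation plus bone-length scaling of the skeleton, and that the result correction is a sliding majority vote over the per-frame class sequence. The function names, joint indices, and window size below are hypothetical, not taken from the paper.

import numpy as np
from collections import Counter

def normalize_skeleton(joints, root=0, ref_pair=(1, 8)):
    """Translate the skeleton so the root joint sits at the origin and
    scale by a reference bone length (hypothetical choice: the distance
    between joints 1 and 8, e.g. neck and mid-hip in an OpenPose-style
    layout), removing camera position and body-size variation."""
    joints = np.asarray(joints, dtype=float)
    centered = joints - joints[root]                    # remove global position
    scale = np.linalg.norm(joints[ref_pair[0]] - joints[ref_pair[1]])
    return centered / max(scale, 1e-6)                  # remove body-size variation

def correct_labels(frame_labels, window=15):
    """Smooth the per-frame class sequence with a sliding majority vote,
    a stand-in for the paper's result correction algorithm: isolated
    mislabels are replaced by the dominant class in their neighborhood,
    making the result sequence more continuous."""
    half = window // 2
    smoothed = []
    for i in range(len(frame_labels)):
        segment = frame_labels[max(0, i - half): i + half + 1]
        smoothed.append(Counter(segment).most_common(1)[0][0])
    return smoothed

if __name__ == "__main__":
    # A 2D skeleton with 25 joints, random here for demonstration only.
    pose = np.random.rand(25, 2)
    print(normalize_skeleton(pose).shape)               # (25, 2)

    # A noisy per-frame prediction sequence: the lone "lunge" vanishes.
    labels = ["squat"] * 10 + ["lunge"] + ["squat"] * 10
    print(correct_labels(labels, window=5))

Under these assumptions, the vote window trades responsiveness for continuity: a larger window suppresses more spurious per-frame flips but delays detection of genuine action transitions, which matters for the 30 FPS real-time target the abstract reports.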
Pages: 12