A high-precision and efficient method for badminton action detection in sports using You Only Look Once with Hourglass Network

Cited by: 0
Authors
Yang, Wenwen [1 ]
Jiang, Minlan [1 ,2 ]
Fang, Xiaosheng [1 ]
Shi, Xiaowei [3 ]
Guo, Yizheng [1 ]
Al-qaness, Mohammed A. A. [1 ,2 ]
Affiliations
[1] Zhejiang Normal Univ, Coll Phys & Elect Informat Engn, Jinhua 321004, Peoples R China
[2] Zhejiang Inst Optoelect, Jinhua 321004, Peoples R China
[3] Hangzhou Hikvis Digital Technol Co Ltd, Hangzhou 310000, Peoples R China
Keywords
Badminton action detection; You Only Look Once; Hourglass network; Self-attention; Depth-wise convolution; Action vectors; Computer vision;
DOI
10.1016/j.engappai.2024.109177
CLC classification number
TP [Automation technology, computer technology];
Discipline classification code
0812;
Abstract
Standardized striking movements are essential in badminton for enhancing player techniques and minimizing sports-related injuries. However, accurately detecting these movements against complex backgrounds while balancing precision and speed remains a significant challenge. To address this, we propose a novel model that synergizes You Only Look Once (YOLO) with the Hourglass Network (HGNet), called YOLO-HGNet, to enhance feature learning across multiple levels. By replacing traditional convolutional modules with Depth-Wise Convolution (DWConv), we achieve significant improvements in data processing efficiency. Additionally, our model incorporates a combination of self-attention and convolution mechanisms (ACmix) and FocalModulation to improve object localization and recognition accuracy in complex backgrounds. Our method leverages action vectors and machine learning techniques to accurately detect and classify six key badminton strokes: backhand push, backhand net shot, forehand clear, forehand push, forehand lift, and forehand net shot. Empirical evaluations demonstrate that our approach achieves a mean Average Precision (mAP) of 96.1% for detecting badminton player postures, outperforming existing advanced methods by at least 8.8%. Furthermore, our method achieves an average accuracy of 95.4% in classifying the six badminton strokes. These results underscore the superior capability of YOLO-HGNet for precise and efficient pose detection, recognition, and classification of badminton strokes, contributing significantly to advancements in sports science and athlete training methodologies.
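The efficiency gain the abstract attributes to Depth-Wise Convolution (DWConv) comes from filtering each channel independently instead of mixing all input channels per output channel. The sketch below illustrates that idea in plain Python; the function name, kernel values, and "valid" padding choice are illustrative assumptions, not details taken from the paper.

```python
def depthwise_conv2d(x, kernels):
    """Apply one k x k kernel per channel independently (no cross-channel mixing).

    x: list of channels, each a 2D list of numbers (H x W).
    kernels: one k x k 2D kernel per channel; "valid" padding (no border).
    Returns one (H-k+1) x (W-k+1) output map per channel.
    """
    out = []
    for channel, kern in zip(x, kernels):
        k = len(kern)
        h, w = len(channel), len(channel[0])
        oh, ow = h - k + 1, w - k + 1
        result = [[0.0] * ow for _ in range(oh)]
        for i in range(oh):
            for j in range(ow):
                # Each output pixel sees only its own channel's k x k window.
                result[i][j] = sum(
                    channel[i + di][j + dj] * kern[di][dj]
                    for di in range(k) for dj in range(k)
                )
        out.append(result)
    return out


# Two 2x2 channels, one 2x2 kernel per channel.
feature_map = [[[1, 2], [3, 4]], [[0, 1], [1, 0]]]
filters = [[[1, 1], [1, 1]], [[2, 0], [0, 2]]]
print(depthwise_conv2d(feature_map, filters))  # [[[10]], [[0]]]
```

A depth-wise layer with C channels needs only C * k * k weights, versus C_out * C_in * k * k for a standard convolution, which is the data-processing saving the abstract refers to; a real model would use a framework's grouped-convolution primitive rather than this loop.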
Pages: 13
Related papers
7 records
  • [1] SHOMY: Detection of Small Hazardous Objects using the You Only Look Once Algorithm
    Kim, Eunchan
    Lee, Jinyoung
    Jo, Hyunjik
    Na, Kwangtek
    Moon, Eunsook
    Gweon, Gahgene
    Yoo, Byungjoon
    Kyung, Yeunwoong
    KSII TRANSACTIONS ON INTERNET AND INFORMATION SYSTEMS, 2022, 16 (08): 2688-2703
  • [2] Modified You Only Look Once Network Model for Enhanced Traffic Scene Detection Performance for Small Targets
    Shi, Lei
    Ren, Shuai
    Fan, Xing
    Wang, Ke
    Lin, Shan
    Liu, Zhanwen
    IET IMAGE PROCESSING, 2025, 19 (01)
  • [3] Two-stage method based on the you only look once framework and image segmentation for crack detection in concrete structures
    Mishra, Mayank
    Jain, Vipul
    Singh, Saurabh Kumar
    Maity, Damodar
    ARCHITECTURE, STRUCTURES AND CONSTRUCTION, 2023, 3 (4): 429-446
  • [4] Toward Reliable Post-Disaster Assessment: Advancing Building Damage Detection Using You Only Look Once Convolutional Neural Network and Satellite Imagery
    Gonzalez, Cesar Luis Moreno
    Montoya, German A.
    Garzon, Carlos Lozano
    MATHEMATICS, 2025, 13 (07)
  • [5] Teacher-Student Model Using Grounding DINO and You Only Look Once for Multi-Sensor-Based Object Detection
    Son, Jinhwan
    Jung, Heechul
    APPLIED SCIENCES-BASEL, 2024, 14 (06):
  • [6] Cloud-Based License Plate Recognition: A Comparative Approach Using You Only Look Once Versions 5, 7, 8, and 9 Object Detection
    Asaju, Christine Bukola
    Owolawi, Pius Adewale
    Tu, Chuling
    Van Wyk, Etienne
    INFORMATION, 2025, 16 (01)
  • [7] An Improved Moving Object Detection in a Wide Area Environment using Image Classification and Recognition by Comparing You Only Look Once (YOLO) Algorithm over Deformable Part Models (DPM) Algorithm.
    Srikar, M.
    Malathi, K.
    JOURNAL OF PHARMACEUTICAL NEGATIVE RESULTS, 2022, 13 (04): 1701-1707