A real-time skeleton-based fall detection algorithm based on temporal convolutional networks and transformer encoder

Cited by: 0
Authors
Yu, Xiaoqun [1 ]
Wang, Chenfeng [1 ]
Wu, Wenyu [1 ]
Xiong, Shuping [2 ]
Affiliations
[1] Southeast Univ, Sch Mech Engn, Dept Mech & Ind Design, Nanjing 211189, Peoples R China
[2] Korea Adv Inst Sci & Technol KAIST, Dept Ind & Syst Engn, Daejeon 34141, South Korea
Funding
National Research Foundation of Singapore;
Keywords
Aging; Fall detection; Pose estimation; Temporal convolutional network; Transformer; Edge computing; Recognition
DOI
10.1016/j.pmcj.2025.102016
Chinese Library Classification (CLC) number
TP [Automation technology, computer technology];
Discipline classification code
0812;
Abstract
As the population of older individuals living independently rises, coupled with the heightened risk of falls among this demographic, the need for automatic fall detection systems becomes increasingly urgent to ensure timely medical intervention. Computer vision (CV)-based methodologies have emerged as a preferred approach among researchers due to their contactless and pervasive nature. However, existing CV-based solutions often suffer from either poor robustness or prohibitively high computational requirements, impeding their practical implementation in elderly living environments. To address these challenges, we introduce TCNTE, a real-time skeleton-based fall detection algorithm that combines a Temporal Convolutional Network (TCN) with a Transformer Encoder (TE). We also mitigate the severe class imbalance issue by employing a weighted focal loss. Cross-validation on multiple publicly available vision-based fall datasets demonstrates TCNTE's superiority over the individual models (TCN and TE) and existing state-of-the-art fall detection algorithms, achieving high accuracies (UP-Fall front view: 99.58%; UP-Fall side view: 98.75%; Le2i: 97.01%; GMDCSA-24: 92.99%) alongside practical viability. Visualizations using t-distributed stochastic neighbor embedding (t-SNE) show that TCNTE yields a wider separation margin and more cohesive clustering between fall and non-fall classes than TCN and TE. Crucially, TCNTE is designed for pervasive deployment in mobile and resource-constrained environments. Integrated with YOLOv8 pose estimation and BoT-SORT human tracking, the algorithm runs on an NVIDIA Jetson Orin NX edge device at an average frame rate of 19 fps for single-person and 17 fps for two-person scenarios. With its validated accuracy and strong real-time performance, TCNTE holds significant promise for practical fall detection applications in older adult care settings.
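For concreteness, the sketch below illustrates in PyTorch the general pattern the abstract describes: dilated temporal convolutions over per-frame skeleton keypoints, a Transformer encoder applying self-attention across frames, and a class-weighted focal loss to counter the fall/non-fall imbalance. All layer sizes, hyperparameters, and names (TCNTESketch, weighted_focal_loss) are illustrative assumptions, not the architecture or settings published in the paper.

# Minimal sketch of a TCN + Transformer-encoder fall classifier with a
# class-weighted focal loss. Hypothetical configuration, not the authors' model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TCNTESketch(nn.Module):
    def __init__(self, n_joints=17, n_channels=2, d_model=64, n_classes=2,
                 n_tcn_layers=3, n_heads=4, n_te_layers=2):
        super().__init__()
        in_dim = n_joints * n_channels           # flattened 2-D keypoints per frame
        layers, ch_in, dilation = [], in_dim, 1
        for _ in range(n_tcn_layers):
            # Dilated temporal convolution (kernel size 3); length-preserving padding
            # is used here for simplicity, whereas a strict TCN would pad causally.
            layers += [nn.Conv1d(ch_in, d_model, kernel_size=3,
                                 padding=dilation, dilation=dilation),
                       nn.ReLU()]
            ch_in, dilation = d_model, dilation * 2
        self.tcn = nn.Sequential(*layers)
        enc_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=n_te_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                        # x: (batch, time, joints*channels)
        h = self.tcn(x.transpose(1, 2))          # -> (batch, d_model, time)
        h = self.encoder(h.transpose(1, 2))      # self-attention over frames
        return self.head(h.mean(dim=1))          # temporal pooling -> class logits

def weighted_focal_loss(logits, targets, alpha, gamma=2.0):
    """Class-weighted focal loss: -alpha_t * (1 - p_t)**gamma * log(p_t)."""
    log_p_t = F.log_softmax(logits, dim=-1).gather(1, targets.unsqueeze(1)).squeeze(1)
    p_t = log_p_t.exp()
    return (-alpha[targets] * (1.0 - p_t) ** gamma * log_p_t).mean()

if __name__ == "__main__":
    model = TCNTESketch()
    clips = torch.randn(4, 30, 34)               # 4 clips, 30 frames, 17 joints x (x, y)
    labels = torch.tensor([0, 1, 0, 0])          # 1 = fall, 0 = non-fall (imbalanced)
    loss = weighted_focal_loss(model(clips), labels,
                               alpha=torch.tensor([0.25, 0.75]))
    print(loss.item())

In this arrangement the convolutional front end captures short-range motion patterns cheaply, while the encoder models longer-range dependencies across the clip, which is consistent with the combination of TCN and TE motivated in the abstract.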
Pages: 13
Related papers
50 records in total
  • [1] Skeleton-Based Fall Detection with Multiple Inertial Sensors Using Spatial-Temporal Graph Convolutional Networks
    Yan, Jianjun
    Wang, Xueqiang
    Shi, Jiangtao
    Hu, Shuai
    SENSORS, 2023, 23 (04)
  • [2] Deformable graph convolutional transformer for skeleton-based action recognition
    Chen, Shuo
    Xu, Ke
    Zhu, Bo
    Jiang, Xinghao
    Sun, Tanfeng
    APPLIED INTELLIGENCE, 2023, 53 (12) : 15390 - 15406
  • [3] Real-time fall attitude detection algorithm based on iRMB
    Xie, Xudong
    Xu, Bing
    Chen, Zhifei
    SIGNAL IMAGE AND VIDEO PROCESSING, 2025, 19 (01)
  • [4] Log Anomaly Detection Method Based on Transformer and Temporal Convolutional Networks
    Liao, Niandong
    Liu, Zihan
    IEEE ACCESS, 2025, 13 : 68547 - 68560
  • [5] Skeleton-Based Detection of Abnormalities in Human Actions Using Graph Convolutional Networks
    Yu, Bruce X. B.
    Liu, Yan
    Chan, Keith C. C.
    2020 SECOND INTERNATIONAL CONFERENCE ON TRANSDISCIPLINARY AI (TRANSAI 2020), 2020, : 131 - 137
  • [6] Fast Temporal Graph Convolutional Model for Skeleton-Based Action Recognition
    Nan, Mihai
    Florea, Adina Magda
    SENSORS, 2022, 22 (19)
  • [7] Skeleton Based Fall Detection with Convolutional Neural Network
    Wu, Jun
    Wang, Ke
    Cheng, Baoping
    Li, Ruifeng
    Chen, Changfan
    Zhou, Tianxiang
    PROCEEDINGS OF THE 2019 31ST CHINESE CONTROL AND DECISION CONFERENCE (CCDC 2019), 2019, : 5266 - 5271
  • [8] Spatial-temporal graph transformer network for skeleton-based temporal action segmentation
    Tian, Xiaoyan
    Jin, Ye
    Zhang, Zhao
    Liu, Peng
    Tang, Xianglong
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 83 (15) : 44273 - 44297