Human centric attention with deep multiscale feature fusion framework for activity recognition in Internet of Medical Things

Cited by: 9
Authors
Hussain, Altaf [1 ]
Khan, Samee Ullah [1 ]
Rida, Imad [2 ]
Khan, Noman [1 ]
Baik, Sung Wook [1 ]
Affiliations
[1] Sejong Univ, Seoul 143747, South Korea
[2] Univ Technol Compiegne, Ctr Rech Royallieu, Lab Biomecan & Bioingn, Compiegne, France
Funding
National Research Foundation of Singapore;
Keywords
Human activity recognition; Multiscale feature fusion; Healthcare activity recognition; Internet of medical things; Information fusion; Artificial intelligence; Surveillance system; FALL DETECTION; VIDEO SURVEILLANCE; LSTM; CNN;
DOI
10.1016/j.inffus.2023.102211
CLC Number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recent advancements in the Internet of Medical Things (IoMT) have revolutionized the healthcare sector, making it an active research area in both academia and industry. Following these advances, automatic Human Activity Recognition (HAR) is now integrated into the IoMT, facilitating remote patient monitoring for smart healthcare. However, implementing HAR via computer vision is challenging due to complex spatiotemporal patterns, single-stream fusion, and cluttered backgrounds. Mainstream approaches rely on pre-trained CNN models, which extract non-salient features because of their generalized weight optimization and limited discriminative feature fusion. In addition, their sequential models perform inadequately in complex scenarios because of vanishing gradients encountered during backpropagation across multiple layers. In response to these challenges, we propose a multiscale feature fusion framework for both indoor and outdoor environments to enhance HAR in healthcare monitoring systems, composed of two stages. First, the proposed Human Centric Attentional Fusion (HCAF) network is fused with the intermediate convolutional features of a lightweight MobileNetV3 backbone to enrich spatial learning for accurate HAR. Next, a Deep Multiscale Feature Fusion (DMFF) network is proposed that enhances long-range temporal dependencies by redesigning the traditional bidirectional LSTM network in a residual fashion, followed by Sequential Multihead Attention (SMA) to eliminate non-relevant information and optimize the spatiotemporal feature vectors. The performance of the proposed fusion model is evaluated on benchmark healthcare and general activity datasets. For healthcare, we used the Multiple Camera Fall and UR Fall Detection datasets, on which the model achieved 99.941% and 100% accuracy, respectively. Furthermore, our fusion strategy is rigorously evaluated on three challenging general HAR datasets, HMDB51, UCF101, and UCF50, achieving 74.942%, 97.337%, and 96.156% accuracy, respectively, surpassing State-of-the-Art (SOTA) methods. Runtime analysis shows that the proposed method is 2x faster than existing methods.
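To make the described pipeline more concrete, the following is a minimal PyTorch sketch of the two-stage design as it can be inferred from the abstract: intermediate MobileNetV3 features refined by a channel-attention block (standing in for the HCAF network), then a residual bidirectional LSTM over the frame sequence with multihead self-attention (standing in for DMFF with SMA). All module names, feature dimensions, and fusion points (HCAFBlock, DMFFHead, the 960-channel tap, hidden size 256) are assumptions for illustration only, not the authors' released implementation.

```python
# Hypothetical sketch of the two-stage HAR pipeline described in the abstract.
# Layer choices, dimensions, and fusion points are assumptions; the authors'
# actual HCAF/DMFF implementations may differ.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v3_large


class HCAFBlock(nn.Module):
    """Stand-in for the Human Centric Attentional Fusion (HCAF) idea:
    channel attention applied to an intermediate MobileNetV3 feature map."""
    def __init__(self, channels: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // 4),
            nn.ReLU(inplace=True),
            nn.Linear(channels // 4, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (B, C, H, W)
        w = self.fc(self.pool(x).flatten(1))   # per-channel attention weights
        return x * w.unsqueeze(-1).unsqueeze(-1)


class DMFFHead(nn.Module):
    """Stand-in for Deep Multiscale Feature Fusion: residual bidirectional
    LSTM over per-frame features, followed by multihead attention (SMA)."""
    def __init__(self, feat_dim: int, hidden: int, num_classes: int):
        super().__init__()
        self.proj = nn.Linear(feat_dim, 2 * hidden)      # match BiLSTM output width
        self.bilstm = nn.LSTM(2 * hidden, hidden, bidirectional=True, batch_first=True)
        self.attn = nn.MultiheadAttention(2 * hidden, num_heads=4, batch_first=True)
        self.cls = nn.Linear(2 * hidden, num_classes)

    def forward(self, seq):                     # seq: (B, T, feat_dim)
        h = self.proj(seq)
        out, _ = self.bilstm(h)
        out = out + h                           # residual connection around the BiLSTM
        att, _ = self.attn(out, out, out)       # sequential multihead self-attention
        return self.cls(att.mean(dim=1))        # temporal average pooling -> logits


class HARPipeline(nn.Module):
    """Per-frame MobileNetV3 features -> HCAF-style attention -> DMFF-style head."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.backbone = mobilenet_v3_large(weights="DEFAULT").features
        self.hcaf = HCAFBlock(channels=960)     # 960 = MobileNetV3-Large feature channels
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = DMFFHead(feat_dim=960, hidden=256, num_classes=num_classes)

    def forward(self, clip):                    # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        x = self.backbone(clip.flatten(0, 1))   # per-frame spatial feature maps
        x = self.pool(self.hcaf(x)).flatten(1)  # (B*T, 960) attended frame descriptors
        return self.head(x.view(b, t, -1))      # (B, num_classes)
```

In use, a training loop would sample a fixed number of frames T per video clip and pass a tensor of shape (B, T, 3, H, W) through HARPipeline; how many frames the authors sample and where exactly they tap the backbone are not specified in the abstract.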
Pages: 15