Human centric attention with deep multiscale feature fusion framework for activity recognition in Internet of Medical Things

Cited by: 9
Authors
Hussain, Altaf [1 ]
Khan, Samee Ullah [1 ]
Rida, Imad [2 ]
Khan, Noman [1 ]
Baik, Sung Wook [1 ]
Affiliations
[1] Sejong Univ, Seoul 143747, South Korea
[2] Univ Technol Compiegne, Ctr Rech Royallieu, Lab Biomecan & Bioingn, Compiegne, France
Funding
National Research Foundation of Singapore
Keywords
Human activity recognition; Multiscale feature fusion; Healthcare activity recognition; Internet of medical things; Information fusion; Artificial intelligence; Surveillance system; FALL DETECTION; VIDEO SURVEILLANCE; LSTM; CNN;
DOI
10.1016/j.inffus.2023.102211
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Recent advancements in the Internet of Medical Things (IoMT) have revolutionized the healthcare sector, making it an active research area in both academia and industry. Following these advances, automatic Human Activity Recognition (HAR) is now integrated into the IoMT, facilitating remote patient monitoring systems for smart healthcare. However, implementing HAR via computer vision is intricate due to complex spatiotemporal patterns, single-stream fusion, and cluttered backgrounds. Mainstream approaches rely on pre-trained CNN models, which extract non-salient features because of their generalized weight optimization and limited discriminative feature fusion. In addition, their sequential models perform inadequately in complex scenarios due to the vanishing gradients encountered during backpropagation across multiple layers. In response to these challenges, we propose a multiscale feature fusion framework for both indoor and outdoor environments to enhance HAR in healthcare monitoring systems, composed of two main stages. First, the proposed Human Centric Attentional Fusion (HCAF) network is fused with the intermediate convolutional features of a lightweight MobileNetV3 backbone to enrich spatial learning capabilities for accurate HAR. Next, a Deep Multiscale Feature Fusion (DMFF) network is proposed that enhances long-range temporal dependencies by redesigning the traditional bidirectional LSTM network in a residual fashion, followed by Sequential Multihead Attention (SMA) to eliminate non-relevant information and optimize the spatiotemporal feature vectors. The performance of the proposed fusion model is evaluated on benchmark healthcare and general activity datasets. For healthcare, we used the Multiple Cameras Fall and UR Fall Detection datasets, achieving 99.941% and 100% accuracy, respectively. Furthermore, our fusion strategy is rigorously evaluated on three challenging general HAR datasets, HMDB51, UCF101, and UCF50, achieving 74.942%, 97.337%, and 96.156% accuracy, respectively, surpassing State-of-the-Art (SOTA) methods. Run-time analysis shows that the proposed method is 2x faster than existing methods.
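To make the pipeline the abstract describes concrete, the following is a minimal PyTorch sketch: per-frame features from a MobileNetV3 backbone, a bidirectional LSTM wrapped in a residual connection, and multihead self-attention over the frame sequence. The class name `HARSketch`, all layer sizes, and the pooling and projection choices are illustrative assumptions, not the authors' released implementation of HCAF/DMFF/SMA.

```python
# Sketch of the described pipeline under stated assumptions; layer sizes
# and pooling/projection choices are NOT from the paper.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v3_small

class HARSketch(nn.Module):
    def __init__(self, num_classes: int, hidden: int = 256, heads: int = 4):
        super().__init__()
        # Lightweight backbone: keep only the convolutional feature
        # extractor; "DEFAULT" loads ImageNet weights (downloads on first use).
        self.backbone = mobilenet_v3_small(weights="DEFAULT").features
        self.pool = nn.AdaptiveAvgPool2d(1)
        feat_dim = 576  # channel count of mobilenet_v3_small's final feature map
        self.bilstm = nn.LSTM(feat_dim, hidden, batch_first=True,
                              bidirectional=True)
        self.proj = nn.Linear(feat_dim, 2 * hidden)  # match dims for the residual add
        self.attn = nn.MultiheadAttention(2 * hidden, heads, batch_first=True)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, frames, 3, H, W)
        b, t = clip.shape[:2]
        x = self.backbone(clip.flatten(0, 1))       # (b*t, 576, h, w)
        x = self.pool(x).flatten(1).view(b, t, -1)  # one 576-d vector per frame
        seq, _ = self.bilstm(x)                     # (b, t, 2*hidden)
        seq = seq + self.proj(x)                    # residual connection around the BiLSTM
        seq, _ = self.attn(seq, seq, seq)           # multihead self-attention over time
        return self.head(seq.mean(dim=1))           # temporal average, then classify

# Usage, e.g. for the 51-class HMDB51 setting:
model = HARSketch(num_classes=51)
logits = model(torch.randn(2, 16, 3, 224, 224))  # 2 clips of 16 frames each
```

The residual add around the BiLSTM mirrors the abstract's motivation: it gives the gradient a direct path past the recurrent layer, mitigating the vanishing-gradient issue the authors attribute to deep sequential models.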
Pages: 15