A General Multistage Deep Learning Framework for Sensor-Based Human Activity Recognition Under Bounded Computational Budget

Cited by: 0
Authors
Wang, Xing [1 ]
Zhang, Lei [1 ]
Cheng, Dongzhou [1 ]
Tang, Yin [2 ]
Wang, Shuoyuan [3 ]
Wu, Hao [4 ]
Song, Aiguo [5 ]
Affiliations
[1] Nanjing Normal Univ, Sch Elect & Automat Engn, Nanjing 210023, Peoples R China
[2] Cent South Univ, Sch Comp Sci & Engn, Changsha 410083, Peoples R China
[3] Southern Univ Sci & Technol, Dept Stat & Data Sci, Shenzhen 518055, Peoples R China
[4] Yunnan Univ, Sch Informat Sci & Engn, Kunming 650500, Yunnan, Peoples R China
[5] Southeast Univ, Sch Instrument Sci & Engn, Nanjing 210096, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Deep learning; human activity recognition (HAR); computational efficiency; accuracy; proposals; reinforcement learning; predictive models; power demand; heuristic algorithms; data models; early exit; sensors; sequential decision
DOI
10.1109/TIM.2024.3481549
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Codes
0808; 0809;
Abstract
In recent years, sliding windows have been widely employed for sensor-based human activity recognition (HAR) due to their simplicity of implementation. In this article, inspired by the fact that not all time intervals in a window are activity-relevant, we propose a novel multistage HAR framework named MS-HAR that implements a sequential decision procedure to progressively process a sequence of relatively small intervals, i.e., reduced input, automatically cropped from the original window with reinforcement learning. Such a design naturally facilitates dynamic inference at runtime, which may be terminated at an arbitrary time once the network obtains sufficiently high confidence about its current prediction. Compared to most existing works that directly handle the whole window, our method allows the computational budget to be controlled very precisely online by setting confidence thresholds, which forces the network to spend more computation on a "difficult" activity and less on an "easy" activity under a finite computational budget. Extensive experiments on four benchmark HAR datasets, comprising WISDM, PAMAP2, USC-HAD, and one weakly labeled dataset, demonstrate that our method is considerably more flexible and efficient than the competitive baselines. Notably, the proposed framework is general, as it is compatible with most mainstream backbone networks.
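The early-exit decision rule described in the abstract — process cropped intervals stage by stage and terminate once the prediction confidence crosses a threshold — can be sketched as follows. This is a minimal illustration only: `multistage_predict`, the stage functions, and the crops are hypothetical stand-ins for the paper's learned sub-networks and RL-selected intervals, not its actual implementation.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def multistage_predict(stage_fns, crops, threshold=0.9):
    """Early-exit inference sketch: feed each cropped interval to its
    stage classifier, average the accumulated logits, and stop as soon
    as the top-class confidence reaches `threshold`.

    `stage_fns` and `crops` are illustrative stand-ins for the learned
    sub-networks and the RL-cropped input intervals.
    Returns (predicted_class, stages_used).
    """
    logits, steps = None, 0
    for fn, crop in zip(stage_fns, crops):
        out = np.asarray(fn(crop), dtype=float)
        logits = out if logits is None else logits + out
        steps += 1
        probs = softmax(logits / steps)   # running average of stage logits
        if probs.max() >= threshold:      # confident enough: exit early
            break
    return int(probs.argmax()), steps
```

Lowering `threshold` trades accuracy for computation: an "easy" activity exits after a single stage, while an ambiguous one consumes more of the budget, matching the abstract's budget-control mechanism.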
Pages: 15