A General Multistage Deep Learning Framework for Sensor-Based Human Activity Recognition Under Bounded Computational Budget

Cited by: 0
Authors
Wang, Xing [1 ]
Zhang, Lei [1 ]
Cheng, Dongzhou [1 ]
Tang, Yin [2 ]
Wang, Shuoyuan [3 ]
Wu, Hao [4 ]
Song, Aiguo [5 ]
Affiliations
[1] Nanjing Normal Univ, Sch Elect & Automat Engn, Nanjing 210023, Peoples R China
[2] Cent South Univ, Sch Comp Sci & Engn, Changsha 410083, Peoples R China
[3] Southern Univ Sci & Technol, Dept Stat & Data Sci, Shenzhen 518055, Peoples R China
[4] Yunnan Univ, Sch Informat Sci & Engn, Kunming 650500, Yunnan, Peoples R China
[5] Southeast Univ, Sch Instrument Sci & Engn, Nanjing 210096, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Deep learning; Human activity recognition; Computational efficiency; Accuracy; Proposals; Reinforcement learning; Predictive models; Power demand; Heuristic algorithms; Data models; early exit; human activity recognition (HAR); reinforcement learning; sensors; sequential decision;
DOI
10.1109/TIM.2024.3481549
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline classification code
0808 ; 0809 ;
Abstract
In recent years, sliding windows have been widely employed for sensor-based human activity recognition (HAR) due to their implementational simplicity. In this article, inspired by the fact that not all time intervals in a window are activity-relevant, we propose a novel multistage HAR framework named MS-HAR, which implements a sequential decision procedure that progressively processes a sequence of relatively small intervals, i.e., a reduced input that is automatically cropped from the original window with reinforcement learning. Such a design naturally facilitates dynamic inference at runtime, which may be terminated at any time once the network is sufficiently confident about its current prediction. Compared with most existing works that directly handle the whole window, our method allows the computational budget to be controlled precisely online by setting confidence thresholds, which forces the network to spend more computation on a "difficult" activity and less on an "easy" one under a finite computational budget. Extensive experiments on four benchmark HAR datasets, namely WISDM, PAMAP2, USC-HAD, and one weakly labeled dataset, demonstrate that our method is considerably more flexible and efficient than competitive baselines. In particular, the proposed framework is general, since it is compatible with most mainstream backbone networks.
Pages: 15
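To make the confidence-thresholded early-exit idea in the abstract concrete, below is a minimal PyTorch sketch of multistage inference over progressively larger crops of a sensor window. It is not the authors' implementation: the StageClassifier backbone, the fixed center-crop schedule, and the simple probability averaging are illustrative stand-ins for the paper's learned reinforcement-learning cropping policy and its choice of mainstream backbone.

# Minimal sketch (not the authors' implementation) of confidence-thresholded,
# multistage early-exit inference over growing crops of a sensor window.
# The center-crop schedule stands in for the learned (RL-based) cropping policy.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StageClassifier(nn.Module):
    """Illustrative 1-D CNN stage: consumes one cropped interval, emits class logits."""
    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):                          # x: (batch, channels, length)
        return self.head(self.features(x).squeeze(-1))

@torch.no_grad()
def multistage_infer(window, stages, crop_lengths, threshold=0.9):
    """Run stages on growing center crops; stop once confidence >= threshold."""
    probs = None
    for stage, length in zip(stages, crop_lengths):
        start = (window.shape[-1] - length) // 2   # center crop as a stand-in
        crop = window[..., start:start + length]   # for the learned crop policy
        logits = stage(crop)
        stage_probs = F.softmax(logits, dim=-1)
        probs = stage_probs if probs is None else 0.5 * (probs + stage_probs)
        conf, pred = probs.max(dim=-1)
        if conf.item() >= threshold:               # early exit: "easy" activity
            return pred.item(), conf.item()
    return pred.item(), conf.item()                # budget exhausted: final guess

# Usage: a 3-axis accelerometer window of 128 samples, three stages, 6 classes.
stages = [StageClassifier(in_channels=3, num_classes=6) for _ in range(3)]
window = torch.randn(1, 3, 128)
label, confidence = multistage_infer(window, stages, crop_lengths=[32, 64, 128])

Under these assumptions, the single `threshold` argument is the online budget-control knob described in the abstract: raising it lets the network spend more stages (and hence more computation) on ambiguous windows, while lowering it makes most windows exit after the first cheap stage.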