Comparing Cross-Subject Performance on Human Activities Recognition Using Learning Models

Cited: 2
Authors
Yang, Zhe [1]
Qu, Mengjie [1]
Pan, Yun [1]
Huan, Ruohong [2]
Affiliations
[1] Zhejiang Univ, Coll Informat Sci & Elect Engn, Hangzhou 310027, Peoples R China
[2] Zhejiang Univ Technol, Coll Comp Sci & Technol, Hangzhou 310023, Peoples R China
Source
IEEE ACCESS | 2022, Vol. 10
Keywords
Feature extraction; Training data; Deep learning; Machine learning; Testing; Sensors; Random forests; Human factors; Cross-subject; deep learning; human activity recognition; leave one subject out; traditional machine learning; SENSORS;
DOI
10.1109/ACCESS.2022.3204739
Chinese Library Classification (CLC)
TP [automation technology; computer technology];
Subject Classification Code
0812;
Abstract
Human activity recognition (HAR) plays a vital role in fields such as ambient assisted living and health monitoring, where cross-subject recognition is one of the main challenges, arising from the diversity among users. Although recent studies have achieved satisfactory results under non-cross-subject conditions, recognition performance degrades significantly under the cross-subject criterion. In this paper, we evaluate three traditional machine learning methods and five deep neural network architectures under the same metrics on three popular HAR datasets: mHealth, PAMAP2, and UCIDSADS. The experimental results show that traditional machine learning approaches are generally more robust to new-subject scenarios under strict leave-one-subject-out cross-validation. Further analysis indicates that hand-crafted features are one major reason for the better cross-subject performance of traditional machine learning, while deep learning is more prone to learning subject-dependent features under end-to-end training. A novel training strategy for decision-tree-based methods is also proposed, yielding an improved random forest model that achieves competitive performance, with average F1-scores (accuracies) of 94.49% (95.09%), 91.64% (92.21%), and 92.70% (93.29%) on the three datasets, compared with state-of-the-art solutions for cross-subject HAR.
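A minimal sketch of the strict leave-one-subject-out (LOSO) protocol the abstract describes, using scikit-learn's LeaveOneGroupOut with a random forest. The feature matrix, labels, subject IDs, and model settings below are synthetic placeholders for illustration; they do not reproduce the paper's hand-crafted feature extraction or its proposed training strategy.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import LeaveOneGroupOut

# Placeholder data: rows are sliding windows of sensor features,
# y holds activity labels, and groups holds per-window subject IDs.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 40))          # hand-crafted window features (assumed)
y = rng.integers(0, 12, size=1000)       # activity labels (assumed 12 classes)
groups = rng.integers(0, 10, size=1000)  # subject IDs (assumed 10 subjects)

f1s, accs = [], []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    # Every window from the held-out subject is excluded from training,
    # so each fold evaluates on a completely unseen user.
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    pred = clf.predict(X[test_idx])
    f1s.append(f1_score(y[test_idx], pred, average="macro"))
    accs.append(accuracy_score(y[test_idx], pred))

print(f"LOSO mean F1: {np.mean(f1s):.4f}, mean accuracy: {np.mean(accs):.4f}")

Unlike a random train/test split, LeaveOneGroupOut guarantees that no subject contributes windows to both partitions, which is the strict cross-subject criterion under which the abstract reports the degradation of end-to-end deep models.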
Pages: 95179-95196
Page count: 18