Improving Cross-Subject Activity Recognition via Adversarial Learning

Citations: 8
Authors
Leite, Clayton Frederick Souza [1 ]
Xiao, Yu [1 ]
Affiliations
[1] Aalto Univ, Dept Commun & Networking, Espoo 02150, Finland
Funding
EU Horizon 2020;
Keywords
Training; Machine learning; Feature extraction; Activity recognition; Training data; Degradation; Generators; Human activity recognition; deep learning; adversarial learning; data augmentation; cross-subject performance; GESTURE RECOGNITION; HAND;
DOI
10.1109/ACCESS.2020.2993818
CLC number
TP [Automation Technology, Computer Technology];
Discipline code
0812;
Abstract
Deep learning has been widely used to implement human activity recognition (HAR) from wearable sensors such as inertial measurement units. The performance of deep activity recognition is heavily affected by the amount and variability of the labeled data available for training the deep learning models, yet collecting and labeling data is costly and time-consuming. Given limited training data, it is hard to maintain high performance across a wide range of subjects, due to differences in the underlying data distributions of the training and testing sets. In this work, we develop a novel solution that applies adversarial learning to improve cross-subject performance by generating training data that mimic artificial subjects (i.e., through data augmentation) and by forcing the activity classifier to ignore subject-dependent information. Contrary to domain adaptation methods, our solution does not use any data from the subjects of the test set (or target domain). Furthermore, our solution is versatile, as it can be combined with any deep neural network as the classifier. On the open dataset PAMAP2, nearly 10% higher cross-subject performance in terms of F1-score can be achieved when training a CNN-LSTM-based classifier with our solution. A performance gain of 5% is also observed when our solution is applied to a state-of-the-art HAR classifier composed of a combination of an inception neural network and a recurrent neural network. We also investigate different factors influencing classification performance (i.e., the selection of sensor modalities, sampling rates, and the number of subjects in the training data), and summarize a practical guideline for implementing deep learning solutions for sensor-based human activity recognition.
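The subject-invariance idea described in the abstract — training a classifier while an adversary tries to recover subject identity from the shared features — is commonly implemented with a gradient reversal layer, as in domain-adversarial training. The NumPy sketch below illustrates that mechanism on synthetic data; the network shapes, data, and hyperparameters are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "sensor" data (hypothetical): 4-D feature vectors per window.
# Dimension 0 carries activity information, dimension 2 carries subject identity.
n = 200
y_act = rng.integers(0, 2, n)          # activity labels (2 classes)
y_sub = rng.integers(0, 2, n)          # subject labels (2 subjects)
X = rng.normal(0, 0.3, (n, 4))
X[:, 0] += 2 * y_act - 1               # activity-dependent dimension
X[:, 2] += 2 * y_sub - 1               # subject-dependent dimension

def one_hot(y, k):
    return np.eye(k)[y]

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(p, Y):
    return -np.mean(np.sum(Y * np.log(p + 1e-12), axis=1))

Ya, Ys = one_hot(y_act, 2), one_hot(y_sub, 2)

# Linear feature extractor W shared by two linear heads:
# an activity classifier Wa and a subject discriminator Ws.
W = rng.normal(0, 0.1, (4, 3))
Wa = rng.normal(0, 0.1, (3, 2))
Ws = rng.normal(0, 0.1, (3, 2))
lr, lam = 0.1, 0.5                     # lam scales the reversed gradient

losses = []
for _ in range(300):
    f = X @ W                          # shared features
    pa = softmax(f @ Wa)               # activity predictions
    ps = softmax(f @ Ws)               # subject predictions
    losses.append(cross_entropy(pa, Ya))

    # Backprop: standard gradients for both heads ...
    dla = (pa - Ya) / n
    dWa = f.T @ dla
    dls = (ps - Ys) / n
    dWs = f.T @ dls                    # discriminator still learns subjects
    # ... but the subject gradient is REVERSED before reaching the
    # extractor, pushing it toward subject-invariant representations.
    df = dla @ Wa.T - lam * (dls @ Ws.T)
    W -= lr * (X.T @ df)
    Wa -= lr * dWa
    Ws -= lr * dWs

print(f"activity loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Note that the discriminator itself is updated normally (it keeps trying to identify the subject); only the gradient flowing back into the shared extractor is negated, which is what distinguishes this from simply ignoring the subject labels.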
Pages: 90542-90554 (13 pages)