Adversarial Deep Feature Extraction Network for User Independent Human Activity Recognition

Cited by: 8
Authors
Suh, Sungho [1 ,2 ]
Rey, Vitor Fortes [1 ,2 ]
Lukowicz, Paul [1 ,2 ]
Affiliations
[1] German Research Center for Artificial Intelligence (DFKI), D-67663 Kaiserslautern, Germany
[2] TU Kaiserslautern, Department of Computer Science, D-67663 Kaiserslautern, Germany
Source
2022 IEEE International Conference on Pervasive Computing and Communications (PerCom), 2022
Keywords
human activity recognition; domain generalization; adversarial learning; multi-task learning
DOI
10.1109/PerCom53586.2022.9762387
CLC Classification Number
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
User dependence remains one of the most difficult general problems in Human Activity Recognition (HAR), in particular when using wearable sensors. This is due to the huge variability in the way different people execute even the simplest actions. In addition, sensor fixation and placement differ between people, and even across sessions for the same user. In theory, the problem could be solved by a large enough data set; however, recording data sets that capture the entire diversity of complex activity sets is seldom practicable. Instead, models are needed that focus on features that are invariant across users. To this end, we present an adversarial subject-independent feature extraction method with maximum mean discrepancy (MMD) regularization for human activity recognition. The proposed model learns a subject-independent embedding representation from data sets covering multiple subjects and generalizes it to unseen target subjects. The network is based on an adversarial encoder-decoder structure with an MMD term that realigns the data distributions across subjects. Experimental results show that the proposed method not only outperforms state-of-the-art methods on four real-world data sets but also effectively improves subject generalization: on these well-known public data sets it significantly improves user-independent performance and reduces the variance of the results.
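The abstract describes realigning embedding distributions across subjects with an MMD regularizer inside an adversarial encoder-decoder. As a rough illustration only, the PyTorch sketch below computes a squared MMD between the embedding batches of two subjects using an RBF kernel; the function name, the single fixed bandwidth, and the biased estimator are assumptions made for this sketch, not the authors' implementation.

```python
import torch


def rbf_mmd(x: torch.Tensor, y: torch.Tensor, bandwidth: float = 1.0) -> torch.Tensor:
    """Squared MMD between two embedding batches under an RBF kernel.

    x: (n, d) embeddings from one subject, y: (m, d) embeddings from another.
    Uses the biased estimator (diagonal terms included) for brevity.
    """
    def rbf_kernel(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # Pairwise squared Euclidean distances, then the Gaussian kernel.
        sq_dists = torch.cdist(a, b, p=2) ** 2
        return torch.exp(-sq_dists / (2.0 * bandwidth ** 2))

    k_xx = rbf_kernel(x, x).mean()
    k_yy = rbf_kernel(y, y).mean()
    k_xy = rbf_kernel(x, y).mean()
    return k_xx + k_yy - 2.0 * k_xy


if __name__ == "__main__":
    # Toy usage: hypothetical 128-dimensional embeddings from two subjects.
    emb_subject_a = torch.randn(32, 128)
    emb_subject_b = torch.randn(32, 128)
    loss_mmd = rbf_mmd(emb_subject_a, emb_subject_b)
    print(loss_mmd.item())
```

In a training loop of this kind, such a term would typically be added to the classification and adversarial losses so that the encoder is penalized when embeddings from different subjects drift apart.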
Pages: 217-226
Number of pages: 10