ContrastSense: Domain-invariant Contrastive Learning for In-the-Wild Wearable Sensing

Cited: 0
Authors
Dai, Gaole [1 ]
Xu, Huatao [2 ]
Yoon, Hyungun [3 ]
Li, Mo [2 ]
Tan, Rui [1 ]
Lee, Sung-Ju [3 ]
Affiliations
[1] Nanyang Technol Univ, Singapore, Singapore
[2] Hong Kong Univ Sci & Technol, Hong Kong, Peoples R China
[3] Korea Adv Inst Sci & Technol, Daejeon, South Korea
Source
PROCEEDINGS OF THE ACM ON INTERACTIVE MOBILE WEARABLE AND UBIQUITOUS TECHNOLOGIES-IMWUT | 2024, Vol. 8, No. 4
Keywords
Wearable Sensing; Contrastive Learning; Domain Generalization; MOTION;
DOI
10.1145/3699744
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Existing wearable sensing models often struggle with domain shifts and class label scarcity. Contrastive learning is a promising technique for addressing class label scarcity; however, it tends to capture domain-related features and suffers from low-quality negatives. To address both problems, we propose ContrastSense, a domain-invariant contrastive learning scheme for a realistic wearable sensing scenario where domain shifts and class label scarcity occur simultaneously. To capture domain-invariant information, ContrastSense exploits unlabeled data together with domain labels (e.g., user IDs or device types) to minimize the discrepancy across domains. To improve the quality of negatives, time and domain labels are leveraged to select samples and refine the negatives. In addition, ContrastSense applies a parameter-wise penalty during fine-tuning to preserve domain-invariant knowledge and further maintain model robustness. Extensive experiments show that ContrastSense outperforms state-of-the-art baselines by 8.9% on human activity recognition with inertial measurement units and by 5.6% on gesture recognition with electromyography under domain shifts across users. Moreover, under other kinds of domain shifts, across devices, on-body positions, and datasets, ContrastSense achieves consistent improvements over the best baselines.
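The negative-selection idea sketched in the abstract (using time and domain labels to pick better negatives) could look roughly like the following. This is a minimal illustration, not the authors' implementation: it assumes negatives are restricted to the anchor's own domain, so the model must discriminate by class rather than by user or device, and that samples recorded close in time to the anchor are excluded because they likely share the anchor's activity class. The function name, the time-window heuristic, and the `window` parameter are all hypothetical.

```python
import numpy as np

def contrastive_negative_mask(domain_ids, timestamps, window=5.0):
    """Return a boolean matrix where mask[i, j] is True iff sample j
    is an admissible negative for anchor i.

    Heuristic sketch (assumed, not the paper's exact rule):
    - keep only candidates from the SAME domain as the anchor, so
      negatives differ by class rather than by user/device;
    - drop candidates recorded within `window` seconds of the anchor,
      since temporally adjacent windows likely share the same class.
    """
    d = np.asarray(domain_ids)
    t = np.asarray(timestamps, dtype=float)
    same_domain = d[:, None] == d[None, :]                 # pairwise domain match
    near_in_time = np.abs(t[:, None] - t[None, :]) < window  # temporal neighbors
    mask = same_domain & ~near_in_time
    np.fill_diagonal(mask, False)  # an anchor is never its own negative
    return mask

# Example: two samples from user 0 recorded 10 s apart are mutual
# negatives; the sample from user 1 is never used as their negative.
m = contrastive_negative_mask([0, 0, 1], [0.0, 10.0, 1.0], window=5.0)
```

In a full pipeline, such a mask would be applied inside an InfoNCE-style loss to zero out the contribution of inadmissible negatives in the denominator.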
Pages: 32