Modality Consistency-Guided Contrastive Learning for Wearable-Based Human Activity Recognition

Cited by: 5
Authors
Guo, Changru [1 ]
Zhang, Yingwei [2 ,3 ]
Chen, Yiqiang [3 ]
Xu, Chenyang [4 ]
Wang, Zhong [1 ]
Affiliations
[1] Lanzhou Univ, Sch Comp Sci & Engn, Lanzhou 730000, Peoples R China
[2] Chinese Acad Sci, Inst Comp Technol, Beijing Key Lab Mobile Comp & Pervas Device, Beijing 100190, Peoples R China
[3] Univ Chinese Acad Sci, Beijing 100190, Peoples R China
[4] Tianjin Univ, Sch Comp Sci, Tianjin 300072, Peoples R China
Source
IEEE INTERNET OF THINGS JOURNAL | 2024, Vol. 11, No. 12
Keywords
Human activity recognition; Self-supervised learning; Task analysis; Data models; Time series analysis; Internet of Things; Face recognition; Contrastive learning (CL); human activity recognition (HAR); intermodality; intramodality; self-supervised; AUTHENTICATION PROTOCOL; RESOURCE-ALLOCATION; TRUST MODEL; SCHEME; COMMUNICATION; EFFICIENT; NETWORK; ACCESS; MANAGEMENT; SECURE;
DOI
10.1109/JIOT.2024.3379019
CLC Classification
TP [Automation & Computer Technology];
Subject Classification
0812;
Abstract
In wearable sensor-based human activity recognition (HAR) research, several factors limit the development of generalized models, such as the time and resources consumed in acquiring abundant annotated data and the inter-data-set inconsistency of activity categories. In this article, we take advantage of the complementarity and redundancy between different wearable modalities (e.g., accelerometers, gyroscopes, and magnetometers), and propose a modality consistency-guided contrastive learning (ModCL) method, which can construct a generalized model using annotation-free self-supervised learning and realize personalized domain adaptation with a small amount of annotated data. Specifically, ModCL exploits both intramodality and intermodality consistency of the wearable device data to construct contrastive learning tasks, encouraging the recognition model to recognize similar patterns and distinguish dissimilar ones. By leveraging these mixed constraint strategies, ModCL can learn the inherent activity patterns and extract meaningful generalized features across different data sets. To verify the effectiveness of the ModCL method, we conduct experiments on five benchmark data sets (i.e., OPPORTUNITY and PAMAP2 as pretraining data sets, and UniMiB-SHAR, UCI-HAR, and WISDM as independent validation data sets). Experimental results show that ModCL achieves significant improvements in recognition accuracy compared with other state-of-the-art methods.
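The abstract describes contrastive tasks built on intermodality consistency: embeddings of the same time window from two modalities (e.g., accelerometer and gyroscope) should agree, while embeddings of different windows should not. The following is a minimal NumPy sketch of that idea using a standard InfoNCE-style loss; it is not the paper's actual implementation, and the function name, temperature value, and toy embeddings are illustrative assumptions.

```python
import numpy as np

def info_nce_loss(z_a, z_b, temperature=0.1):
    """Cross-modal InfoNCE loss: paired windows from two modalities
    are positives; all other pairs in the batch are negatives."""
    # L2-normalize so dot products become cosine similarities
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature           # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # positives lie on the diagonal (same window, different modality)
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
anchor = rng.normal(size=(8, 16))                    # modality-A embeddings
aligned = anchor + 0.01 * rng.normal(size=(8, 16))   # consistent modality-B view
unaligned = rng.normal(size=(8, 16))                 # inconsistent embeddings
# Consistent pairs should incur a lower contrastive loss
assert info_nce_loss(anchor, aligned) < info_nce_loss(anchor, unaligned)
```

In practice the embeddings would come from per-modality encoders, and ModCL combines such an intermodality term with intramodality consistency constraints on augmented views of the same signal.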
Pages: 21750-21762
Page count: 13
Related Papers
50 items in total
  • [21] SigRep: Toward Robust Wearable Emotion Recognition With Contrastive Representation Learning
    Dissanayake, Vipula
    Seneviratne, Sachith
    Rana, Rajib
    Wen, Elliott
    Kaluarachchi, Tharindu
    Nanayakkara, Suranga
    IEEE ACCESS, 2022, 10 : 18105 - 18120
  • [22] Semi-Supervised Contrastive Learning for Human Activity Recognition
    Liu, Dongxin
    Abdelzaher, Tarek
    17TH ANNUAL INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING IN SENSOR SYSTEMS (DCOSS 2021), 2021, : 45 - 53
  • [23] A survey on wearable sensor modality centred human activity recognition in health care
    Wang, Yan
    Cang, Shuang
    Yu, Hongnian
    EXPERT SYSTEMS WITH APPLICATIONS, 2019, 137 : 167 - 190
  • [24] Personalized Human Activity Recognition Based on Integrated Wearable Sensor and Transfer Learning
    Fu, Zhongzheng
    He, Xinrun
    Wang, Enkai
    Huo, Jun
    Huang, Jian
    Wu, Dongrui
    SENSORS, 2021, 21 (03) : 1 - 23
  • [25] Self-Supervised Contrastive Learning for Radar-Based Human Activity Recognition
    Rahman, Mohammad Mahbubur
    Gurbuz, Sevgi Zubeyde
    2023 IEEE RADAR CONFERENCE, RADARCONF23, 2023,
  • [26] Dynamic Temperature Scaling in Contrastive Self-Supervised Learning for Sensor-Based Human Activity Recognition
    Khaertdinov, Bulat
    Asteriadis, Stylianos
    Ghaleb, Esam
    IEEE TRANSACTIONS ON BIOMETRICS, BEHAVIOR, AND IDENTITY SCIENCE, 2022, 4 (04): : 498 - 507
  • [27] HarMI: Human Activity Recognition Via Multi-Modality Incremental Learning
    Zhang, Xiao
    Yu, Hongzheng
    Yang, Yang
    Gu, Jingjing
    Li, Yujun
    Zhuang, Fuzhen
    Yu, Dongxiao
    Ren, Zhaochun
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2022, 26 (03) : 939 - 951
  • [28] Contrastive Learning with Cross-Modal Knowledge Mining for Multimodal Human Activity Recognition
    Brinzea, Razvan
    Khaertdinov, Bulat
    Asteriadis, Stylianos
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [29] Contrastive Accelerometer-Gyroscope Embedding Model for Human Activity Recognition
    Koo, Inyong
    Park, Yeonju
    Jeong, Minki
    Kim, Changick
    IEEE SENSORS JOURNAL, 2023, 23 (01) : 506 - 513
  • [30] Sensor Data Augmentation by Resampling in Contrastive Learning for Human Activity Recognition
    Wang, Jinqiang
    Zhu, Tao
    Gan, Jingyuan
    Chen, Liming Luke
    Ning, Huansheng
    Wan, Yaping
    IEEE SENSORS JOURNAL, 2022, 22 (23) : 22994 - 23008