Modality Consistency-Guided Contrastive Learning for Wearable-Based Human Activity Recognition

Cited by: 5
Authors
Guo, Changru [1 ]
Zhang, Yingwei [2 ,3 ]
Chen, Yiqiang [3 ]
Xu, Chenyang [4 ]
Wang, Zhong [1 ]
Affiliations
[1] Lanzhou Univ, Sch Comp Sci & Engn, Lanzhou 730000, Peoples R China
[2] Chinese Acad Sci, Inst Comp Technol, Beijing Key Lab Mobile Comp & Pervas Device, Beijing 100190, Peoples R China
[3] Univ Chinese Acad Sci, Beijing 100190, Peoples R China
[4] Tianjin Univ, Sch Comp Sci, Tianjin 300072, Peoples R China
Source
IEEE INTERNET OF THINGS JOURNAL | 2024, Vol. 11, No. 12
Keywords
Human activity recognition; Self-supervised learning; Task analysis; Data models; Time series analysis; Internet of Things; Face recognition; Contrastive learning (CL); human activity recognition (HAR); intermodality; intramodality; self-supervised; AUTHENTICATION PROTOCOL; RESOURCE-ALLOCATION; TRUST MODEL; SCHEME; COMMUNICATION; EFFICIENT; NETWORK; ACCESS; MANAGEMENT; SECURE;
DOI
10.1109/JIOT.2024.3379019
CLC Classification Number
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
In wearable sensor-based human activity recognition (HAR) research, several factors limit the development of generalized models, such as the time and resources required to acquire abundant annotated data and the inter-data set inconsistency of activity categories. In this article, we take advantage of the complementarity and redundancy between different wearable modalities (e.g., accelerometers, gyroscopes, and magnetometers) and propose a modality consistency-guided contrastive learning (ModCL) method, which can construct a generalized model using annotation-free self-supervised learning and realize personalized domain adaptation with a small amount of annotated data. Specifically, ModCL exploits both intramodality and intermodality consistency of the wearable device data to construct contrastive learning tasks, encouraging the recognition model to recognize similar patterns and distinguish dissimilar ones. By leveraging these mixed constraint strategies, ModCL can learn the inherent activity patterns and extract meaningful generalized features across different data sets. To verify the effectiveness of the ModCL method, we conduct experiments on five benchmark data sets (i.e., OPPORTUNITY and PAMAP2 as pretraining data sets, and UniMiB-SHAR, UCI-HAR, and WISDM as independent validation data sets). Experimental results show that ModCL achieves significant improvements in recognition accuracy compared with other state-of-the-art methods.
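The abstract gives no implementation details, so the following is only an illustrative sketch of the intramodality/intermodality contrastive idea it describes, assuming a standard NT-Xent-style loss rather than the paper's actual formulation. Two encodings of the same sensor window, either two augmentations of one modality (intramodality) or encodings from different modalities such as accelerometer and gyroscope (intermodality), are treated as a positive pair; the function name, temperature value, and tensor shapes are all hypothetical.

```python
import torch
import torch.nn.functional as F


def ntxent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Symmetric NT-Xent loss over two (N, d) sets of embeddings of the same windows."""
    # Normalize so the dot product becomes cosine similarity.
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                  # (2N, d)
    sim = torch.matmul(z, z.t()) / temperature      # (2N, 2N) similarity matrix
    n = z1.size(0)
    # Exclude self-similarity so each row's softmax runs over the other 2N-1 samples.
    self_mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))
    # Row i's positive is the other view/modality encoding of the same window.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)


# Hypothetical usage: accelerometer vs. gyroscope encodings of the same 32 windows
# form intermodality positive pairs (random tensors stand in for encoder outputs).
if __name__ == "__main__":
    acc_emb = torch.randn(32, 128)
    gyro_emb = torch.randn(32, 128)
    print(ntxent_loss(acc_emb, gyro_emb).item())
```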
Pages: 21750-21762
Number of pages: 13
Related Papers
50 records in total
  • [31] Sensor Data Augmentation by Resampling in Contrastive Learning for Human Activity Recognition
    Wang, Jinqiang
    Zhu, Tao
    Gan, Jingyuan
    Chen, Liming Luke
    Ning, Huansheng
    Wan, Yaping
    [J]. IEEE SENSORS JOURNAL, 2022, 22 (23) : 22994 - 23008
  • [32] Wearable-sensors Based Activity Recognition for Smart Human Healthcare Using Internet of Things
    Hu, Ning
    Su, Shen
    Tang, Chang
    Wang, Lulu
    [J]. 2020 16TH INTERNATIONAL WIRELESS COMMUNICATIONS & MOBILE COMPUTING CONFERENCE, IWCMC, 2020, : 1909 - 1915
  • [33] Invariant Feature Learning for Sensor-Based Human Activity Recognition
    Hao, Yujiao
    Zheng, Rong
    Wang, Boyu
    [J]. IEEE TRANSACTIONS ON MOBILE COMPUTING, 2022, 21 (11) : 4013 - 4024
  • [34] Multitask LSTM Model for Human Activity Recognition and Intensity Estimation Using Wearable Sensor Data
    Barut, Onur
    Zhou, Li
    Luo, Yan
    [J]. IEEE INTERNET OF THINGS JOURNAL, 2020, 7 (09) : 8760 - 8768
  • [35] A Practical Wearable Sensor-based Human Activity Recognition Research Pipeline
    Liu, Hui
    Hartmann, Yale
    Schultz, Tanja
    [J]. HEALTHINF: PROCEEDINGS OF THE 15TH INTERNATIONAL JOINT CONFERENCE ON BIOMEDICAL ENGINEERING SYSTEMS AND TECHNOLOGIES - VOL 5: HEALTHINF, 2021, : 847 - 856
  • [36] Human Activity Recognition Based on Dynamic Active Learning
    Bi, Haixia
    Perello-Nieto, Miquel
    Santos-Rodriguez, Raul
    Flach, Peter
    [J]. IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2021, 25 (04) : 922 - 934
  • [37] Wearable sensor-based pattern mining for human activity recognition: deep learning approach
    Bijalwan, Vishwanath
    Semwal, Vijay Bhaskar
    Gupta, Vishal
    [J]. INDUSTRIAL ROBOT-THE INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH AND APPLICATION, 2022, 49 (01): : 21 - 33
  • [38] Few-shot transfer learning for wearable IMU-based human activity recognition
    Ganesha, H. S.
    Gupta, R.
    Gupta, S. H.
    Rajan, S.
    [J]. NEURAL COMPUTING AND APPLICATIONS, 2024, 36 (18) : 10811 - 10823
  • [39] Cosmo: Contrastive Fusion Learning with Small Data for Multimodal Human Activity Recognition
    Ouyang, Xiaomin
    Shuai, Xian
    Zhou, Jiayu
    Shi, Ivy Wang
    Xie, Zhiyuan
    Xing, Guoliang
    Huang, Jianwei
    [J]. PROCEEDINGS OF THE 2022 THE 28TH ANNUAL INTERNATIONAL CONFERENCE ON MOBILE COMPUTING AND NETWORKING, ACM MOBICOM 2022, 2022, : 324 - 337
  • [40] Deep Learning in Human Activity Recognition with Wearable Sensors: A Review on Advances
    Zhang, Shibo
    Li, Yaxuan
    Zhang, Shen
    Shahabi, Farzad
    Xia, Stephen
    Deng, Yu
    Alshurafa, Nabil
    [J]. SENSORS, 2022, 22 (04)