Modality Consistency-Guided Contrastive Learning for Wearable-Based Human Activity Recognition

Citations: 5
Authors
Guo, Changru [1 ]
Zhang, Yingwei [2 ,3 ]
Chen, Yiqiang [3 ]
Xu, Chenyang [4 ]
Wang, Zhong [1 ]
Affiliations
[1] Lanzhou Univ, Sch Comp Sci & Engn, Lanzhou 730000, Peoples R China
[2] Chinese Acad Sci, Inst Comp Technol, Beijing Key Lab Mobile Comp & Pervas Device, Beijing 100190, Peoples R China
[3] Univ Chinese Acad Sci, Beijing 100190, Peoples R China
[4] Tianjin Univ, Sch Comp Sci, Tianjin 300072, Peoples R China
Source
IEEE INTERNET OF THINGS JOURNAL | 2024, Vol. 11, No. 12
Keywords
Human activity recognition; Self-supervised learning; Task analysis; Data models; Time series analysis; Internet of Things; Face recognition; Contrastive learning (CL); human activity recognition (HAR); intermodality; intramodality; self-supervised; AUTHENTICATION PROTOCOL; RESOURCE-ALLOCATION; TRUST MODEL; SCHEME; COMMUNICATION; EFFICIENT; NETWORK; ACCESS; MANAGEMENT; SECURE;
DOI
10.1109/JIOT.2024.3379019
CLC number
TP [Automation Technology, Computer Technology];
Discipline classification code
0812;
Abstract
In wearable sensor-based human activity recognition (HAR) research, several factors limit the development of generalized models, such as the time and resources consumed in acquiring abundant annotated data and the inconsistency of activity categories across data sets. In this article, we take advantage of the complementarity and redundancy between different wearable modalities (e.g., accelerometers, gyroscopes, and magnetometers) and propose a modality consistency-guided contrastive learning (ModCL) method, which can construct a generalized model through annotation-free self-supervised learning and realize personalized domain adaptation with a small amount of annotated data. Specifically, ModCL exploits both intramodality and intermodality consistency of the wearable device data to construct contrastive learning tasks, encouraging the recognition model to recognize similar patterns and distinguish dissimilar ones. By combining these constraint strategies, ModCL learns inherent activity patterns and extracts meaningful, generalized features across different data sets. To verify the effectiveness of the ModCL method, we conduct experiments on five benchmark data sets (OPPORTUNITY and PAMAP2 as pretraining data sets, and UniMiB-SHAR, UCI-HAR, and WISDM as independent validation data sets). Experimental results show that ModCL achieves significant improvements in recognition accuracy compared with other state-of-the-art methods.
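To make the abstract's description concrete, the following is a minimal sketch (in PyTorch) of the kind of intra- and intermodality contrastive objective it describes: two augmented views of the same accelerometer window are pulled together, and accelerometer and gyroscope embeddings of the same window are aligned across modalities. The encoder architecture, the InfoNCE formulation, the noise-jitter augmentation, the temperature of 0.1, and the equal weighting of the two loss terms are illustrative assumptions, not details taken from the paper.

# Illustrative sketch only; not the authors' ModCL implementation.
import torch
import torch.nn.functional as F

def info_nce(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE loss: (z_a[i], z_b[i]) are positive pairs; all other
    pairings within the batch serve as negatives."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature                 # (B, B) similarity logits
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)

def make_encoder(in_channels: int, dim: int = 64) -> torch.nn.Module:
    """Toy 1-D CNN encoder per modality (an assumed architecture)."""
    return torch.nn.Sequential(
        torch.nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
        torch.nn.ReLU(),
        torch.nn.AdaptiveAvgPool1d(1),
        torch.nn.Flatten(),
        torch.nn.Linear(32, dim),
    )

acc_enc, gyr_enc = make_encoder(3), make_encoder(3)

# Fake batch: 8 windows of 128 samples from a 3-axis accelerometer and gyroscope.
acc = torch.randn(8, 3, 128)
gyr = torch.randn(8, 3, 128)

# Intramodality term: two noise-jittered views of the same accelerometer
# window should map to nearby embeddings.
intra_loss = info_nce(acc_enc(acc + 0.05 * torch.randn_like(acc)),
                      acc_enc(acc + 0.05 * torch.randn_like(acc)))

# Intermodality term: accelerometer and gyroscope embeddings of the same
# window should agree, with other windows acting as negatives.
inter_loss = info_nce(acc_enc(acc), gyr_enc(gyr))

loss = intra_loss + inter_loss   # equal weighting is an assumption
loss.backward()

Because the intermodality positives come from temporally aligned sensor streams rather than labels, an objective of this shape can be trained on unannotated data, which is the property the abstract attributes to ModCL's self-supervised pretraining stage.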
Pages: 21750-21762
Number of pages: 13