CrossHAR: Generalizing Cross-dataset Human Activity Recognition via Hierarchical Self-Supervised Pretraining

Cited by: 14
Authors
Hong, Zhiqing [1 ,2 ]
Li, Zelong [1 ]
Zhong, Shuxin [2 ]
Lyu, Wenjun [2 ]
Wang, Haotian [1 ]
Ding, Yi [3 ]
He, Tian [1 ]
Zhang, Desheng [2 ]
Affiliations
[1] JD Logistics, Beijing, People's Republic of China
[2] Rutgers, The State University of New Jersey, New Brunswick, NJ 08901, USA
[3] University of Texas at Dallas, Richardson, TX, USA
Source
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT) | 2024, Vol. 8, No. 2
Keywords
Human activity recognition; Cross-dataset; Cross-domain; Self-supervised learning
DOI
10.1145/3659597
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
The increasing availability of low-cost wearable devices and smartphones has significantly advanced the field of sensor-based human activity recognition (HAR), attracting considerable research interest. One of the major challenges in HAR is the domain shift problem in cross-dataset activity recognition, which arises from variations in users, device types, and sensor placements between the source dataset and the target dataset. Although domain adaptation methods have shown promise, they typically require access to the target dataset during training, which might not be practical in some scenarios. To address these issues, we introduce CrossHAR, a new HAR model designed to improve performance on unseen target datasets. CrossHAR involves three main steps: (i) CrossHAR exploits the sensor data generation principle to diversify the data distribution and augment the raw sensor data. (ii) CrossHAR then employs a hierarchical self-supervised pretraining approach on the augmented data to learn a generalizable representation. (iii) Finally, CrossHAR fine-tunes the pretrained model with a small set of labeled data from the source dataset, enhancing its performance in cross-dataset HAR. Our extensive experiments across multiple real-world HAR datasets show that CrossHAR outperforms current state-of-the-art methods by 10.83% in accuracy, demonstrating its effectiveness in generalizing to unseen target datasets.
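To make the three-stage pipeline in the abstract concrete, the following is a minimal sketch, not the authors' implementation: it pairs a physics-motivated rotation augmentation (step i) with a plain masked-reconstruction pretraining objective (a simplified, single-level stand-in for the paper's hierarchical scheme, step ii) and a linear classification head for fine-tuning (step iii). All shapes, hyperparameters, and function names here are illustrative assumptions.

# A minimal sketch of the three-stage pipeline outlined above (hypothetical
# shapes and hyperparameters; not the authors' implementation).
import numpy as np
import torch
import torch.nn as nn

# (i) Physics-motivated augmentation: apply a random 3-D rotation to a
# triaxial accelerometer window, mimicking the varied device orientations
# and placements that cause cross-dataset shift.
def random_rotation(window: np.ndarray) -> np.ndarray:
    """window: (T, 3) accelerometer samples; returns a randomly rotated copy."""
    a, b, g = np.random.uniform(0.0, 2.0 * np.pi, size=3)
    rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
    ry = np.array([[ np.cos(b), 0.0, np.sin(b)],
                   [ 0.0,       1.0, 0.0      ],
                   [-np.sin(b), 0.0, np.cos(b)]])
    rx = np.array([[1.0, 0.0,        0.0       ],
                   [0.0, np.cos(g), -np.sin(g)],
                   [0.0, np.sin(g),  np.cos(g)]])
    return window @ (rz @ ry @ rx).T

# (ii) Self-supervised pretraining: a plain masked-reconstruction objective
# with a Transformer encoder, standing in for the paper's hierarchical scheme.
class Encoder(nn.Module):
    def __init__(self, in_dim=3, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.proj = nn.Linear(in_dim, d_model)
        block = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.backbone = nn.TransformerEncoder(block, num_layers)

    def forward(self, x):                      # x: (B, T, 3)
        return self.backbone(self.proj(x))     # -> (B, T, d_model)

def masked_reconstruction_loss(encoder, recon_head, x, mask_ratio=0.15):
    """Zero out random timesteps and reconstruct them from context."""
    mask = torch.rand(x.shape[0], x.shape[1]) < mask_ratio   # (B, T)
    corrupted = x.clone()
    corrupted[mask] = 0.0
    recon = recon_head(encoder(corrupted))                   # (B, T, 3)
    return ((recon - x) ** 2)[mask].mean()                   # MSE on masked steps

if __name__ == "__main__":
    encoder, recon_head = Encoder(), nn.Linear(64, 3)
    opt = torch.optim.Adam(
        list(encoder.parameters()) + list(recon_head.parameters()), lr=1e-3)
    for _ in range(5):   # pretrain on augmented, unlabeled windows
        batch = np.stack([random_rotation(np.random.randn(128, 3))
                          for _ in range(16)])
        loss = masked_reconstruction_loss(
            encoder, recon_head, torch.tensor(batch, dtype=torch.float32))
        opt.zero_grad()
        loss.backward()
        opt.step()

    # (iii) Fine-tune the encoder plus a linear head on a small labeled source
    # set (training loop omitted); mean-pool over time before classifying.
    classifier = nn.Linear(64, 6)              # e.g. six activity classes
    features = encoder(torch.randn(16, 128, 3)).mean(dim=1)
    print(classifier(features).shape)          # torch.Size([16, 6])

The rotation augmentation targets device-orientation variation, one of the shift sources the abstract names; the single masked-reconstruction loss deliberately collapses the paper's hierarchical pretraining into one level for brevity.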
Pages: 26