HMGAN: A Hierarchical Multi-Modal Generative Adversarial Network Model for Wearable Human Activity Recognition

Cited by: 13
Authors
Chen, Ling [1 ,2 ]
Hu, Rong [1 ,3 ]
Wu, Menghan [1 ,3 ]
Zhou, Xin [1 ,4 ]
Affiliations
[1] Zhejiang Univ, Hangzhou, Peoples R China
[2] Zhejiang Univ, Coll Comp Sci & Technol, Alibaba-Zhejiang Univ Joint Res Inst Frontier Tech, 38 Zheda Rd, Hangzhou 310027, Peoples R China
[3] Zhejiang Univ, Coll Comp Sci & Technol, 38 Zheda Rd, Hangzhou 310027, Peoples R China
[4] Zhejiang Univ, Sch Software Technol, 38 Zheda Rd, Hangzhou 310027, Peoples R China
Source
PROCEEDINGS OF THE ACM ON INTERACTIVE MOBILE WEARABLE AND UBIQUITOUS TECHNOLOGIES-IMWUT | 2023 / Vol. 7 / Issue 03
Keywords
Wearable human activity recognition; multi-modal; generative adversarial network; VITAL SIGN; ACCELERATION;
DOI
10.1145/3610909
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Wearable Human Activity Recognition (WHAR) is an important research field of ubiquitous and mobile computing. Deep WHAR models suffer from overfitting caused by the lack of a large amount and variety of labeled data, which is usually addressed by generating data to enlarge the training set, i.e., Data Augmentation (DA). Generative Adversarial Networks (GANs) have shown excellent data generation ability, and the generalization ability of a classification model can be improved by GAN-based DA. However, existing GANs cannot make full use of the important modality information and fail to balance modality details and global consistency, and thus cannot meet the requirements of deep multi-modal WHAR. In this paper, a hierarchical multi-modal GAN model (HMGAN) is proposed for WHAR. HMGAN consists of multiple modal generators, one hierarchical discriminator, and one auxiliary classifier. The modal generators learn the complex multi-modal distributions of sensor data. The hierarchical discriminator provides discrimination outputs for both low-level modal discrimination losses and a high-level overall discrimination loss, striking a balance between modality details and global consistency. Experiments on five public WHAR datasets demonstrate that HMGAN achieves state-of-the-art performance for WHAR, outperforming the best baseline by an average of 3.4%, 3.8%, and 3.5% in accuracy, macro F1 score, and weighted F1 score, respectively.
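The abstract's key idea, combining per-modality discrimination losses with an overall discrimination loss, can be sketched numerically. This is a minimal, hypothetical illustration, not the paper's actual formulation: the function name `hierarchical_d_loss`, the non-saturating BCE form, and the blending weight `alpha` are all assumptions introduced here for clarity.

```python
import numpy as np

def hierarchical_d_loss(modal_real, modal_fake, overall_real, overall_fake, alpha=0.5):
    """Blend low-level (per-modality) and high-level (overall) GAN losses.

    modal_real / modal_fake: lists of the discriminator's probability outputs
    for each sensor modality (real and generated samples, respectively).
    overall_real / overall_fake: outputs on the fused multi-modal sample.
    alpha: hypothetical weight trading global consistency vs. modality detail.
    """
    eps = 1e-8  # numerical safety for log

    def bce(real, fake):
        # standard discriminator loss: push real -> 1, fake -> 0
        return -(np.log(real + eps) + np.log(1.0 - fake + eps))

    # low-level term: average of per-modality discrimination losses
    low_level = float(np.mean([bce(r, f) for r, f in zip(modal_real, modal_fake)]))
    # high-level term: discrimination loss on the overall (fused) sample
    high_level = float(bce(overall_real, overall_fake))
    return alpha * high_level + (1.0 - alpha) * low_level

# Example: two modalities (e.g., accelerometer and gyroscope channels)
loss = hierarchical_d_loss([0.9, 0.8], [0.1, 0.2], 0.85, 0.15)
```

A larger `alpha` emphasizes global consistency across modalities; a smaller one emphasizes fidelity within each modality.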
Pages: 27
References (94 in total)
[1]   Synthetic Sensor Data for Human Activity Recognition [J].
Alharbi, Fayez ;
Ouarbya, Lahcen ;
Ward, Jamie A. .
2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,
[2]  
Alzantot M, 2017, INT CONF PERVAS COMP
[3]  
Anguita D, 2013, ESANN, V3, P3
[4]  
Arjovsky M, 2017, arXiv, DOI 10.48550/arXiv.1701.07875
[5]   Activity recognition from user-annotated acceleration data [J].
Bao, L ;
Intille, SS .
PERVASIVE COMPUTING, PROCEEDINGS, 2004, 3001 :1-17
[6]  
Belghazi MI, 2018, PR MACH LEARN RES, V80
[7]   A unified generative model using generative adversarial network for activity recognition [J].
Chan, Mang Hong ;
Noor, Mohd Halim Mohd .
JOURNAL OF AMBIENT INTELLIGENCE AND HUMANIZED COMPUTING, 2021, 12 (07) :8119-8128
[8]   A Systematic Study of Unsupervised Domain Adaptation for Robust Human-Activity Recognition [J].
Chang, Youngjae ;
Mathur, Akhil ;
Isopoussu, Anton ;
Song, Junehwa ;
Kawsar, Fahim .
PROCEEDINGS OF THE ACM ON INTERACTIVE MOBILE WEARABLE AND UBIQUITOUS TECHNOLOGIES-IMWUT, 2020, 4 (01)
[9]  
Chen C, 2015, IEEE IMAGE PROC, P168, DOI 10.1109/ICIP.2015.7350781
[10]   SALIENCE: An Unsupervised User Adaptation Model for Multiple Wearable Sensors Based Human Activity Recognition [J].
Chen, Ling ;
Zhang, Yi ;
Miao, Shenghuan ;
Zhu, Sirou ;
Hu, Rong ;
Peng, Liangying ;
Lv, Mingqi .
IEEE TRANSACTIONS ON MOBILE COMPUTING, 2023, 22 (09) :5492-5503