Transformer-based models to deal with heterogeneous environments in Human Activity Recognition

Cited: 4
Authors
Ek S. [1]
Portet F. [1]
Lalanda P. [1]
Affiliations
[1] Univ Grenoble Alpes, CNRS, Grenoble INP, Grenoble
Keywords
Data heterogeneity; Human Activity Recognition; Machine learning; Transformers
DOI
10.1007/s00779-023-01776-3
Abstract
Human Activity Recognition (HAR) on mobile devices has been shown to be feasible using neural models trained on data collected from the device's inertial measurement units. These models have used convolutional neural networks (CNNs), long short-term memory networks (LSTMs), transformers, or a combination of these to achieve state-of-the-art results with real-time performance. However, such approaches have not been extensively evaluated in real-world situations where the input data may differ from the training data. This paper highlights the issue of data heterogeneity in machine learning applications and how it can hinder their deployment in pervasive settings. To address this problem, we propose and publicly release the code of two sensor-wise transformer architectures, HART and MobileHART (Human Activity Recognition Transformer). Our experiments on several publicly available datasets show that these HART architectures outperform previous architectures while requiring fewer floating-point operations and parameters than conventional transformers. The results also show that they are more robust to changes in mobile position or device brand and are hence better suited to the heterogeneous environments encountered in real-life settings. © 2023, The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature.
Pages: 2267-2280
Page count: 13
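Illustrative sketch: the abstract describes sensor-wise transformer architectures for IMU-based HAR but does not spell out the layer structure. The PyTorch sketch below only illustrates the general sensor-wise idea (a separate patch embedding and attention branch per sensor, fused before classification); it is not the authors' HART or MobileHART implementation, and the window length, patch size, model dimensions, class count, and fusion choice are assumptions made purely for illustration. The authors' actual code is available in their public release.

# Minimal illustrative sketch (PyTorch) of a sensor-wise transformer for IMU HAR.
# NOT the authors' HART/MobileHART code: dimensions, patch size, and the fusion
# head are assumptions chosen only to make the example runnable.
import torch
import torch.nn as nn


class SensorBranch(nn.Module):
    """Patch-embeds one sensor's 3-axis window and encodes it with self-attention."""

    def __init__(self, patch_size=16, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        # 1D convolution acts as a patch embedding over the time axis
        # (3 input channels = the x, y, z axes of one sensor).
        self.patch_embed = nn.Conv1d(3, d_model, kernel_size=patch_size, stride=patch_size)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           dim_feedforward=2 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, x):  # x: (batch, 3, window_len)
        tokens = self.patch_embed(x).transpose(1, 2)  # (batch, n_patches, d_model)
        return self.encoder(tokens)


class SensorWiseHAR(nn.Module):
    """One branch per sensor (accelerometer, gyroscope), fused for classification."""

    def __init__(self, n_classes=6, d_model=64):
        super().__init__()
        self.accel_branch = SensorBranch(d_model=d_model)
        self.gyro_branch = SensorBranch(d_model=d_model)
        self.head = nn.Linear(2 * d_model, n_classes)

    def forward(self, accel, gyro):  # each: (batch, 3, window_len)
        a = self.accel_branch(accel).mean(dim=1)  # temporal average pooling
        g = self.gyro_branch(gyro).mean(dim=1)
        return self.head(torch.cat([a, g], dim=-1))


if __name__ == "__main__":
    # Example: a 128-sample window per sensor (e.g. ~2.5 s at 50 Hz), 6 activity classes.
    model = SensorWiseHAR()
    accel = torch.randn(8, 3, 128)
    gyro = torch.randn(8, 3, 128)
    print(model(accel, gyro).shape)  # torch.Size([8, 6])

Keeping the attention branches sensor-wise, as sketched here, is one way to limit parameter count and FLOPs relative to a single transformer over all concatenated channels; the exact mechanism used by HART and MobileHART should be taken from the paper and its released code.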