Enhanced AIoT Multi-Modal Fusion for Human Activity Recognition in Ambient Assisted Living Environment

Cited by: 1
Authors
Patel, Ankit D. [1 ]
Jhaveri, Rutvij H. [1 ]
Patel, Ashish D. [2 ]
Shah, Kaushal A. [1 ]
Shah, Jigarkumar [3 ]
Affiliations
[1] Pandit Deendayal Energy Univ, Sch Technol, CSE Dept, Gandhinagar, India
[2] SVM Inst Technol, Dept Comp Engn, Bharuch, India
[3] Pandit Deendayal Energy Univ, Sch Technol, ICT Dept, Gandhinagar, India
Keywords
CNN; deep learning; edge computing; edge devices; human activity recognition; LSTM; multi-modal fusion; spatial features; time series analysis
DOI
10.1002/spe.3394
Chinese Library Classification (CLC)
TP31 [Computer software]
Subject classification codes
081202; 0835
Abstract
Methodology: Human activity recognition (HAR) has emerged as a fundamental capability in various disciplines, including ambient assisted living, healthcare, and human-computer interaction. This study proposes a novel approach to activity recognition that integrates IoT technologies with artificial intelligence and edge computing. The work presents a fusion HAR approach that combines readings from wearable sensors, such as accelerometers and gyroscopes, with images captured by vision-based sensors, such as cameras, by incorporating the capabilities of Long Short-Term Memory (LSTM) and Convolutional Neural Network (CNN) models. Fusing these models captures both temporal and spatial information, improving the accuracy and resilience of activity identification systems. The CNN model extracts spatial features from the images, representing the contextual information of the activities, while the LSTM model processes sequential accelerometer and gyroscope data to extract the temporal dynamics of human activities.
Results: The performance of the fusion approach is evaluated through experiments with varying parameters, and the best-suited parameters are applied to the model. The results demonstrate that the fusion of LSTM and CNN models outperforms standalone models and traditional fusion methods, achieving an accuracy of 98%, almost 9% higher than standalone models.
Conclusion: The fusion of LSTM and CNN models integrates complementary information from both data sources, leading to improved performance. Computation is performed on the local edge device, resulting in enhanced privacy and reduced latency. The approach benefits real-world applications where accurate and reliable HAR systems are essential for enhancing human-machine interaction and monitoring human activities in various domains.
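The abstract describes a two-branch fusion architecture: a CNN for spatial features from camera frames and an LSTM for temporal features from accelerometer/gyroscope sequences, combined for joint classification. The record does not specify the architecture's layer sizes or fusion method, so the following PyTorch sketch is only an illustrative feature-level (concatenation) fusion under assumed dimensions; all layer widths, the number of classes, and input shapes are hypothetical.

```python
import torch
import torch.nn as nn

class FusionHAR(nn.Module):
    """Illustrative CNN+LSTM fusion for HAR (sizes are assumptions,
    not taken from the paper)."""
    def __init__(self, n_classes=6, sensor_channels=6, hidden=64):
        super().__init__()
        # CNN branch: spatial features from an RGB frame
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (batch, 32)
        )
        # LSTM branch: temporal features from accel+gyro sequences
        self.lstm = nn.LSTM(sensor_channels, hidden, batch_first=True)
        # Classifier over the concatenated feature vector
        self.head = nn.Linear(32 + hidden, n_classes)

    def forward(self, image, sensor_seq):
        spatial = self.cnn(image)              # (batch, 32)
        _, (h_n, _) = self.lstm(sensor_seq)    # h_n: (1, batch, hidden)
        temporal = h_n[-1]                     # last hidden state
        fused = torch.cat([spatial, temporal], dim=1)
        return self.head(fused)

model = FusionHAR()
img = torch.randn(2, 3, 64, 64)   # batch of 2 RGB frames
seq = torch.randn(2, 50, 6)       # 50 timesteps of 3-axis accel + 3-axis gyro
logits = model(img, seq)          # shape: (2, 6)
```

Concatenating the two branch outputs before a shared linear head is one common way to let the classifier weigh spatial context against motion dynamics; the paper's actual fusion strategy may differ.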
Pages: 731-747 (17 pages)