Attention-Based Hybrid Deep Learning Network for Human Activity Recognition Using WiFi Channel State Information

Cited: 16
Authors
Mekruksavanich, Sakorn [1 ]
Phaphan, Wikanda [2 ]
Hnoohom, Narit [3 ]
Jitpattanakul, Anuchit [4 ,5 ]
Affiliations
[1] Univ Phayao, Sch Informat & Commun Technol, Dept Comp Engn, Phayao 56000, Thailand
[2] King Mongkuts Univ Technol North Bangkok, Fac Appl Sci, Dept Appl Stat, Bangkok 10800, Thailand
[3] Mahidol Univ, Fac Engn, Dept Comp Engn, Nakhon Pathom 73170, Thailand
[4] King Mongkuts Univ Technol North Bangkok, Fac Appl Sci, Dept Math, Bangkok 10800, Thailand
[5] King Mongkuts Univ Technol North Bangkok, Sci & Technol Res Inst, Intelligent & Nonlinear Dynam Innovat Res Ctr, Bangkok 10800, Thailand
Source
APPLIED SCIENCES-BASEL | 2023, Vol. 13, Issue 15
Keywords
human activity recognition; WiFi sensing; deep learning; attention mechanism; channel state information
DOI
10.3390/app13158884
Chinese Library Classification
O6 [Chemistry]
Discipline Classification Code
0703
Abstract
The recognition of human movements is a crucial aspect of AI-related research. Although vision- and sensor-based methods provide richer data, they inconvenience users and raise social concerns, including privacy issues. WiFi-based sensing is increasingly used to collect human-activity data because of its ubiquity, versatility, and high performance. Channel state information (CSI), a characteristic of WiFi signals, can be employed to identify various human activities. Traditional machine learning approaches depend on manually designed features, so recent studies propose leveraging deep learning to extract features automatically from raw CSI data. This research introduces a versatile framework for recognizing human activities from CSI data and evaluates its effectiveness across different deep learning networks. A hybrid deep learning network called CNN-GRU-AttNet is proposed to automatically extract informative spatial-temporal features from raw CSI data and efficiently classify activities. The effectiveness of the hybrid model is assessed by comparing it with five conventional deep learning models (CNN, LSTM, BiLSTM, GRU, and BiGRU) on two widely recognized benchmark datasets (CSI-HAR and StanWiFi). The experimental results demonstrate that the CNN-GRU-AttNet model surpasses previous state-of-the-art techniques, improving average accuracy by up to 4.62%. The proposed hybrid model is therefore well suited to identifying human actions from CSI data.
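To make the described architecture concrete, below is a minimal PyTorch sketch of a CNN-GRU network with additive attention of the kind the abstract outlines: convolutions extract spatial features across CSI subcarriers, a GRU models temporal dependencies, and an attention layer pools the sequence before classification. It is illustrative only; the layer widths, the 90-subcarrier input, the 7 activity classes, and the class name CNNGRUAttNet are assumptions, not the authors' published specification.

    # Hypothetical sketch of a CNN-GRU-with-attention HAR model; sizes are assumed.
    import torch
    import torch.nn as nn

    class CNNGRUAttNet(nn.Module):
        def __init__(self, n_subcarriers=90, n_classes=7, hidden=128):
            super().__init__()
            # 1-D convolution extracts spatial features across CSI subcarriers
            self.cnn = nn.Sequential(
                nn.Conv1d(n_subcarriers, 64, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.MaxPool1d(2),
            )
            # GRU models temporal dependencies over the downsampled time axis
            self.gru = nn.GRU(64, hidden, batch_first=True)
            # Additive attention scores each time step for weighted pooling
            self.attn = nn.Linear(hidden, 1)
            self.fc = nn.Linear(hidden, n_classes)

        def forward(self, x):                       # x: (batch, time, subcarriers)
            h = self.cnn(x.transpose(1, 2))         # -> (batch, 64, time/2)
            h, _ = self.gru(h.transpose(1, 2))      # -> (batch, time/2, hidden)
            w = torch.softmax(self.attn(h), dim=1)  # attention weights over time
            ctx = (w * h).sum(dim=1)                # attention-weighted pooling
            return self.fc(ctx)                     # class logits

    model = CNNGRUAttNet()
    logits = model(torch.randn(8, 500, 90))         # 8 CSI windows, 500 time steps
    print(logits.shape)                             # torch.Size([8, 7])

The attention pooling is what distinguishes this design from a plain CNN-GRU: instead of using only the final GRU state, the model learns which time steps of the CSI window are most informative for the activity label.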
Pages: 22