Gesture recognition system based on cross-domain CSI extracted from Wi-Fi devices combined with the 3D CNN

Cited by: 7
Authors
Bulugu, Isack [1 ]
Affiliations
[1] Univ Dar Es Salaam, Dept Elect & Telecommun, Dar Es Salaam, Tanzania
Keywords
Gesture recognition; Human-computer interaction; Wi-Fi; Channel state information; Cross-domain; 3D convolutional neural network
DOI
10.1007/s11760-023-02545-8
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline Classification Codes
0808; 0809
Abstract
Gesture recognition offers a wide range of applications in human-computer interaction. Wi-Fi devices have been deployed almost everywhere in recent years, thanks to the rapid expansion of wireless communication and the Internet of Things, making Wi-Fi channel state information (CSI) data widely available. Most existing CSI-based gesture recognition studies focus solely on recognition in a known domain. In an unknown domain, new data from unseen scenes must be collected for additional learning and training; otherwise, recognition accuracy drops significantly, limiting practicality. To address this problem, a cross-domain CSI gesture recognition approach based on 3D convolutional neural networks is proposed. The method achieves cross-scene gesture recognition by extracting domain-independent features and combining them with a 3D convolutional neural network learning model. The approach is verified on public datasets. The findings demonstrate that the technique achieves 89.67% recognition accuracy in known scenes and 86.34% in unknown scenes, indicating that it can effectively recognize gestures across scenes.
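The core operation the abstract describes — applying 3D convolution to a CSI tensor whose axes span time, subcarrier, and antenna — can be sketched in plain NumPy. This is an illustrative reconstruction only, not the paper's implementation; the `conv3d` helper and the tensor dimensions (20 time steps × 30 subcarriers × 3 antennas) are hypothetical assumptions for the sake of the example.

```python
import numpy as np

def conv3d(volume, kernel):
    """Valid 3D cross-correlation (no padding, stride 1), as used
    inside a 3D CNN layer. volume and kernel are 3D arrays."""
    T, H, W = volume.shape
    t, h, w = kernel.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                # Slide the kernel over the CSI volume and take the
                # elementwise product-sum at each position.
                out[i, j, k] = np.sum(volume[i:i+t, j:j+h, k:k+w] * kernel)
    return out

# Hypothetical CSI "video": 20 time steps x 30 subcarriers x 3 antennas.
csi = np.random.randn(20, 30, 3)
# A 3x3x3 averaging kernel stands in for one learned filter.
feature_map = conv3d(csi, np.ones((3, 3, 3)) / 27.0)
print(feature_map.shape)  # (18, 28, 1)
```

In a full 3D CNN, many such kernels are learned jointly and stacked with pooling and dense layers; frameworks express this as a single layer (e.g. a 3D convolution module) rather than explicit loops.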
Pages: 3201-3209
Page count: 9