TransferSense: towards environment-independent and one-shot WiFi sensing

Citations: 18
Authors
Bu Q. [1 ]
Ming X. [1 ]
Hu J. [1 ]
Zhang T. [1 ]
Feng J. [1 ]
Zhang J. [2 ]
Affiliations
[1] Department of Computer Science, Northwest University, Xi’an
[2] Computer Science Department, Lamar University, Beaumont, TX
Keywords
Channel state information; Human interaction; Transfer learning; WiFi;
DOI
10.1007/s00779-020-01480-6
Abstract
WiFi has recently established itself as a powerful medium for radio-frequency (RF) sensing thanks to its low cost and convenience. Many tasks, such as gesture recognition, activity recognition, and fall detection, can be implemented by measuring how human activities affect the propagation of WiFi signals. However, current WiFi-based sensing solutions are limited in scale: they are designed for only a few activities, and they must collect data and train models in the same domain, because a model established in one deployment environment is usually not applicable to new objects in a target domain. This paper presents TransferSense, an environment-independent and one-shot WiFi sensing method based on deep learning. First, the amplitude and phase of channel state information (CSI) are combined to enlarge the feature set, addressing the shortage of features that results from single-source information. Second, TransferSense converts RF sensing tasks into image classification tasks and fuses low-level and high-level semantic features extracted from a pre-trained convolutional neural network to achieve end-to-end, high-precision activity recognition. Finally, TransferSense applies transfer learning with a small number of labeled samples in the target domain to perform high-precision cross-domain sensing, reducing the data collection cost in the target domain. We verified the effectiveness of TransferSense on two representative WiFi sensing applications, gait identification and sign recognition. In a single deployment environment, TransferSense achieved more than 97% gait identification accuracy for 44 users and more than 81% sign language recognition accuracy for 100 isolated sign language words. For new-object recognition in cross-domain sensing, it achieved more than 77% gait identification accuracy for 10 new users, more than 88% sign language recognition accuracy for 10 new isolated sign language words, and more than 81% gesture identification accuracy for 2 new gestures. © 2021, Springer-Verlag London Ltd., part of Springer Nature.
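The first step described in the abstract, combining CSI amplitude and phase into an image-like input for a CNN, can be illustrated with a minimal sketch. The function name and the linear-detrend phase calibration below are assumptions (a common CSI sanitization step), not the paper's exact preprocessing:

```python
import numpy as np

def csi_to_feature_image(csi: np.ndarray) -> np.ndarray:
    """Turn a complex CSI matrix (packets x subcarriers) into a
    2-channel, image-like tensor of amplitude and calibrated phase."""
    amplitude = np.abs(csi)
    # Unwrap raw phase across subcarriers to remove 2*pi jumps.
    phase = np.unwrap(np.angle(csi), axis=1)
    # Remove the linear phase offset (e.g. from sampling time offset):
    # subtract a per-packet line fitted over the subcarrier index.
    n = csi.shape[1]
    idx = np.arange(n)
    slope = (phase[:, -1] - phase[:, 0]) / (n - 1)
    intercept = phase.mean(axis=1)
    calibrated = phase - slope[:, None] * idx - intercept[:, None]
    # Stack into (2, packets, subcarriers) for CNN-style input.
    return np.stack([amplitude, calibrated], axis=0)

# Example: 4 packets x 30 subcarriers of synthetic complex CSI.
rng = np.random.default_rng(0)
csi = rng.standard_normal((4, 30)) + 1j * rng.standard_normal((4, 30))
features = csi_to_feature_image(csi)
print(features.shape)  # (2, 4, 30)
```

The resulting two-channel tensor can then be fed to a pre-trained image classifier, matching the abstract's framing of RF sensing as image classification.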
Pages: 555-573
Page count: 18