Attention-Based Cross-Domain Gesture Recognition Using WiFi Channel State Information

Cited by: 1
Authors
Hong, Hao [1 ]
Huang, Baoqi [1 ]
Gu, Yu [2 ]
Jia, Bing [1 ]
Affiliations
[1] Inner Mongolia Univ, Coll Comp Sci, Engn Res Ctr Ecol Big Data, Minist Educ,Inner Mongolia Key Lab Wireless Netwo, Hohhot 010021, Peoples R China
[2] Hefei Univ Technol, Sch Comp & Informat, Hefei 230009, Peoples R China
Source
ALGORITHMS AND ARCHITECTURES FOR PARALLEL PROCESSING, ICA3PP 2021, PT II | 2022, Vol. 13156
Funding
National Natural Science Foundation of China;
Keywords
Cross-domain; Gesture recognition; Channel state information; Attention mechanism; Commodity WiFi;
DOI
10.1007/978-3-030-95388-1_38
CLC Number
TP31 [Computer Software];
Subject Classification Codes
081202; 0835;
Abstract
Gesture recognition is an important step toward ubiquitous WiFi-based human-computer interaction. However, most current WiFi-based gesture recognition systems rely on domain-specific training. To address this issue, we propose an attention-based cross-domain gesture recognition system using WiFi channel state information. To overcome the shortcoming of handcrafted feature extraction in state-of-the-art cross-domain models, our model uses the attention mechanism to automatically extract domain-independent gesture features from the spatial and temporal dimensions. We implement the model and extensively evaluate its performance using the Widar3 dataset, which involves 16 users and 6 gestures across 5 orientations and 5 positions in 3 different environments. The evaluation results show that the average in-domain gesture recognition accuracy achieved by the model is 99.67%, and the average cross-domain gesture recognition accuracies are 96.57%, 97.86% and 94.2% across rooms, positions and orientations, respectively. Its cross-domain gesture recognition accuracy significantly outperforms state-of-the-art methods.
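The core idea in the abstract — attention that weights CSI along both the spatial (subcarrier) and temporal dimensions before pooling — can be illustrated with a minimal NumPy sketch. This is not the authors' actual architecture; the function `attention_pool` and the projection vectors `w_t` and `w_s` are hypothetical stand-ins for learned attention parameters, and the CSI amplitudes are synthetic.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(csi, w_t, w_s):
    """Pool a CSI amplitude matrix (T timesteps x S subcarriers) into a
    fixed-length feature vector via temporal and spatial attention.
    w_t (shape S) and w_s (shape T) stand in for learned projections."""
    t_scores = softmax(csi @ w_t)      # (T,) one weight per timestep
    s_scores = softmax(csi.T @ w_s)    # (S,) one weight per subcarrier
    weighted = csi * t_scores[:, None] * s_scores[None, :]
    return weighted.sum(axis=0)        # (S,) attention-pooled feature

# Toy demo: 50 timesteps x 30 subcarriers of synthetic CSI amplitude.
rng = np.random.default_rng(0)
csi = rng.random((50, 30))
w_t = rng.standard_normal(30)
w_s = rng.standard_normal(50)
feat = attention_pool(csi, w_t, w_s)
```

In the full system, such a pooled vector would feed a gesture classifier; the attention weights let the model emphasize informative subcarriers and time windows instead of relying on handcrafted features.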
Pages: 571-585
Page count: 15