User-Definable Dynamic Hand Gesture Recognition Based on Doppler Radar and Few-Shot Learning

Cited: 15
Authors
Zeng, Xianglong [1 ]
Wu, Chaoyang [2 ]
Ye, Wen-Bin [2 ]
Affiliations
[1] Shenzhen Univ, Sch Optoelect Engn, Shenzhen 518060, Peoples R China
[2] Shenzhen Univ, Coll Elect & Informat Engn, Shenzhen 518060, Peoples R China
Keywords
Task analysis; Feature extraction; Radar; Spectrogram; Sensors; Doppler radar; Gesture recognition; Dynamic hand gesture recognition; micro-Doppler spectrogram; deep learning; meta-learning; few-shot learning;
DOI
10.1109/JSEN.2021.3107943
CLC Classification Number
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Subject Classification Code
0808 ; 0809 ;
Abstract
In recent years, radar-based dynamic hand gesture recognition (DHGR) systems have been widely used for non-contact interaction with smart electronic devices because of their safety, privacy protection, and robustness to different illumination conditions. To achieve high-precision gesture recognition, researchers generally adopt machine learning algorithms, especially deep learning. However, most deep learning models are trained on large gesture datasets and accept only certain predefined gestures as input, i.e., user-defined gestures are not allowed. In this work, a neural network model trained with a meta-learning method is proposed to handle the few-shot classification task and realize user-definable DHGR. Instead of learning a mapping between an unknown input gesture and fixed predefined classes, the proposed model learns to compare the input gesture with arbitrary gestures entered by users and then decide which gesture class is the most probable. Without retraining, the model allows a user to enter self-defined gestures one or a few times and then recognizes these gestures subsequently. To the best of our knowledge, this is the first work to apply meta few-shot learning to the DHGR problem in the true sense. In this model, an embedding module extracts features from the input micro-Doppler spectrograms, and a comparison module performs the feature-level comparison. Finally, a weighting module is proposed to weigh the comparison results and make the prediction. The evaluation shows that with only one support sample per class (i.e., one-shot tasks), the proposed model achieves an accuracy of 91% for 5 gesture classes and 90% for 10 gesture classes. With 3 support samples (i.e., 3-shot tasks), the accuracies for 5 and 10 gestures improve to 96% and 92%, respectively.
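To make the embedding / comparison / weighting pipeline described in the abstract concrete, below is a minimal, illustrative PyTorch sketch of an N-way K-shot episode on micro-Doppler spectrograms. The layer sizes, the assumed spectrogram shape (1 x 64 x 64), and the module definitions are illustrative assumptions only, not the configuration reported in the paper.

```python
import torch
import torch.nn as nn


class Embedding(nn.Module):
    """Convolutional encoder: micro-Doppler spectrogram -> feature map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(), nn.MaxPool2d(2),
        )

    def forward(self, x):
        return self.net(x)  # (B, 64, 16, 16) for 1x64x64 inputs


class Comparison(nn.Module):
    """Feature-level comparison of one query feature with one support feature."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(128, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, query_feat, support_feat):
        # Concatenate the two feature maps along the channel axis and score the pair.
        return self.net(torch.cat([query_feat, support_feat], dim=1))  # (B, 1)


class Weighting(nn.Module):
    """Weighs the K per-shot comparison scores of each class into one class score."""
    def __init__(self, k_shot):
        super().__init__()
        self.net = nn.Linear(k_shot, 1)

    def forward(self, shot_scores):                # shot_scores: (n_way, k_shot)
        return self.net(shot_scores).squeeze(-1)   # (n_way,) class logits


def episode_logits(embed, compare, weigh, support, query):
    """support: (n_way, k_shot, 1, H, W) user-entered examples; query: (1, H, W)."""
    n_way, k_shot = support.shape[:2]
    s_feat = embed(support.view(n_way * k_shot, *support.shape[2:]))
    q_feat = embed(query.unsqueeze(0)).expand(n_way * k_shot, -1, -1, -1)
    scores = compare(q_feat, s_feat).view(n_way, k_shot)   # one score per support sample
    return weigh(scores)                                    # (n_way,) logits over classes


if __name__ == "__main__":
    n_way, k_shot = 5, 3                                 # e.g. a 5-way 3-shot episode
    embed, compare, weigh = Embedding(), Comparison(), Weighting(k_shot)
    support = torch.randn(n_way, k_shot, 1, 64, 64)      # stand-in for user-defined gestures
    query = torch.randn(1, 64, 64)                       # stand-in for the gesture to recognize
    logits = episode_logits(embed, compare, weigh, support, query)
    print("predicted class:", logits.argmax().item())
```

During meta-training, many such episodes would be sampled from a base gesture set and optimized with a cross-entropy loss over the episode logits; at deployment, the support set is simply replaced by the user's self-defined gesture samples without retraining.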
Pages: 23224 - 23233
Page count: 10
Related Papers
50 records in total
  • [1] Few-Shot User-Definable Radar-Based Hand Gesture Recognition at the Edge
    Mauro, Gianfranco
    Chmurski, Mateusz
    Servadei, Lorenzo
    Pegalajar-Cuellar, Manuel
    Morales-Santos, Diego P.
    IEEE ACCESS, 2022, 10 : 29741 - 29759
  • [2] Category-Extensible Human Activity Recognition Based on Doppler Radar by Few-Shot Learning
    Liu, Ziyu
    Wu, Chaoyang
    Ye, Wenbin
    IEEE SENSORS JOURNAL, 2022, 22 (22) : 21952 - 21960
  • [3] FS-HGR: Few-Shot Learning for Hand Gesture Recognition via Electromyography
    Rahimian, Elahe
    Zabihi, Soheil
    Asif, Amir
    Farina, Dario
    Atashzar, Seyed Farokh
    Mohammadi, Arash
    IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING, 2021, 29 : 1004 - 1015
  • [4] FEW-SHOT LEARNING FOR DECODING SURFACE ELECTROMYOGRAPHY FOR HAND GESTURE RECOGNITION
    Rahimian, Elahe
    Zabihi, Soheil
    Asif, Amir
    Atashzar, S. Farokh
    Mohammadi, Arash
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 1300 - 1304
  • [5] Radar target recognition based on few-shot learning
    Yang, Yue
    Zhang, Zhuo
    Mao, Wei
    Li, Yang
    Lv, Chengang
    MULTIMEDIA SYSTEMS, 2023, 29 (05) : 2865 - 2875
  • [6] PreGesNet: Few-Shot Acoustic Gesture Recognition Based on Task-Adaptive Pretrained Networks
    Zou, Yongpan
    Wang, Yunshu
    Dong, Haozhi
    Wang, Yaqing
    He, Yanbo
    Wu, Kaishun
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (12) : 12811 - 12829
  • [7] Img2Acoustic: A Cross-Modal Gesture Recognition Method Based on Few-Shot Learning
    Zou, Yongpan
    Weng, Jianhao
    Kuang, Wenting
    Jiao, Yang
    Leung, Victor C. M.
    Wu, Kaishun
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2025, 24 (03) : 1496 - 1512
  • [8] Few-shot learning for ear recognition
    Zhang, Jie
    Yu, Wen
    Yang, Xudong
    Deng, Fang
    PROCEEDINGS OF 2019 INTERNATIONAL CONFERENCE ON IMAGE, VIDEO AND SIGNAL PROCESSING (IVSP 2019), 2019, : 50 - 54
  • [9] A Multimodal Dynamic Hand Gesture Recognition Based on Radar-Vision Fusion
    Liu, Haoming
    Liu, Zhenyu
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2023, 72