Multi-modal zero-shot dynamic hand gesture recognition

Cited by: 12
Authors
Rastgoo, Razieh [1 ]
Kiani, Kourosh [1 ]
Escalera, Sergio [2 ,3 ]
Sabokrou, Mohammad [4 ]
Affiliations
[1] Semnan Univ, Elect & Comp Engn Dept, Semnan 3513119111, Iran
[2] Univ Barcelona, Dept Math & Informat, Barcelona, Spain
[3] Univ Barcelona, Comp Vis Ctr, Barcelona, Spain
[4] Inst Res Fundamental Sci IPM, Tehran 193955746, Iran
Keywords
Hand gesture recognition; Zero-shot learning; Deep learning; Transformer; Multi-modal;
DOI
10.1016/j.eswa.2024.123349
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Code
081104; 0812; 0835; 1405
Abstract
Zero-Shot Learning (ZSL) has advanced rapidly in recent years. To overcome the annotation bottleneck in Dynamic Hand Gesture Recognition (DHGR), we explore Zero-Shot Dynamic Hand Gesture Recognition (ZS-DHGR), which requires no annotated visual examples and instead leverages the textual descriptions of the gesture classes. To this end, we propose a multi-modal ZS-DHGR model that harnesses the complementary capabilities of deep features fused with skeleton-based ones. A Transformer-based model and a C3D model are used for hand detection and deep feature extraction, respectively. To balance the dimensionality of the skeleton-based and deep features, we use an AutoEncoder (AE) on top of a Long Short-Term Memory (LSTM) network. Finally, a semantic space maps the visual features to the lingual embeddings of the class labels, obtained via the Bidirectional Encoder Representations from Transformers (BERT) model. Results on four large-scale datasets, RKS-PERSIANSIGN, First-Person, ASLVID, and isoGD, show the superiority of the proposed model over state-of-the-art alternatives in ZS-DHGR. The proposed model obtains accuracies of 74.6%, 67.2%, 68.8%, and 60.2% on the RKS-PERSIANSIGN, First-Person, ASLVID, and isoGD datasets, respectively.
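For illustration only, the PyTorch sketch below follows the pipeline described in the abstract: precomputed C3D video features are fused with skeleton features compressed by an AutoEncoder on top of an LSTM, the fused vector is projected into the BERT embedding space of the class labels, and a gesture is recognized as the nearest class embedding. All module names, dimensions, and the cosine-similarity classifier are assumptions made for this sketch, not the authors' released implementation; the Transformer-based hand detector and the C3D feature extractor are treated as upstream, precomputed inputs.

import torch
import torch.nn as nn

class SkeletonBranch(nn.Module):
    # LSTM over per-frame hand keypoints, compressed by a small AE bottleneck.
    def __init__(self, in_dim=42, hidden=256, bottleneck=128):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        self.encode = nn.Linear(hidden, bottleneck)  # AE encoder (decoder omitted here)

    def forward(self, keypoints):                    # keypoints: (B, T, in_dim)
        _, (h, _) = self.lstm(keypoints)
        return self.encode(h[-1])                    # (B, bottleneck)

class ZSDHGRSketch(nn.Module):
    # Fuses C3D video features with skeleton features and projects them into
    # the BERT label-embedding space (dimension 768 for bert-base).
    def __init__(self, c3d_dim=4096, skel_dim=128, bert_dim=768):
        super().__init__()
        self.skeleton = SkeletonBranch(bottleneck=skel_dim)
        self.project = nn.Sequential(
            nn.Linear(c3d_dim + skel_dim, 1024),
            nn.ReLU(),
            nn.Linear(1024, bert_dim),
        )

    def forward(self, c3d_feat, keypoints):          # c3d_feat: (B, c3d_dim)
        fused = torch.cat([c3d_feat, self.skeleton(keypoints)], dim=-1)
        return self.project(fused)                   # (B, bert_dim)

def zero_shot_classify(visual_emb, class_emb):
    # Nearest class in the semantic space by cosine similarity; class_emb holds
    # one BERT embedding per (possibly unseen) class label.
    sims = nn.functional.cosine_similarity(
        visual_emb.unsqueeze(1), class_emb.unsqueeze(0), dim=-1)  # (B, C)
    return sims.argmax(dim=1)

Because recognition reduces to nearest-neighbour search over label embeddings, classes unseen during training can be recognized simply by adding their BERT embeddings to class_emb.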
Pages: 10