Spatio-Temporal Transformer with Kolmogorov-Arnold Network for Skeleton-Based Hand Gesture Recognition

Times Cited: 0
Authors
Han, Pengcheng [1 ]
He, Xin [1 ]
Matsumaru, Takafumi [1 ]
Dutta, Vibekananda [2 ,3 ]
Affiliations
[1] Waseda Univ, Grad Sch Informat Prod & Syst, Kitakyushu 8080135, Japan
[2] Warsaw Univ Technol, Inst Micromech & Photon, Fac Mechatron, PL-00661 Warsaw, Poland
[3] Waseda Univ, Waseda Inst Adv Study, Tokyo 1698050, Japan
Keywords
hand gesture recognition; human-computer interaction (HCI); skeleton based; deep learning; graph convolutional networks; transformer; attention mechanism; feature extraction; continuous hand gesture recognition
DOI
10.3390/s25030702
Chinese Library Classification
O65 [Analytical Chemistry]
Discipline Classification Codes
070302; 081704
Abstract
Manually crafted features often suffer from subjectivity, inadequate accuracy, or a lack of robustness in recognition. Meanwhile, existing deep learning methods often overlook the structural and dynamic characteristics of the human hand, failing to fully exploit the contextual information of joints in both the spatial and temporal domains. To effectively capture dependencies between hand joints that are not adjacent but may have potential connections, it is essential to learn long-range relationships. This study proposes a skeleton-based hand gesture recognition framework, ST-KT, which combines a spatio-temporal graph convolutional network with a transformer based on the Kolmogorov-Arnold Network (KAN). It incorporates spatio-temporal graph convolutional network (ST-GCN) modules and a spatio-temporal transformer module with KAN (KAN-Transformer). The ST-GCN modules, each comprising a spatial graph convolutional network (SGCN) and a temporal convolutional network (TCN), extract primary features from skeleton sequences by leveraging the strength of graph convolutional networks in the spatio-temporal domain. A spatio-temporal position embedding method integrates node features, enriching representations with node identities and temporal information. The transformer layer includes a spatial KAN-Transformer (S-KT) and a temporal KAN-Transformer (T-KT), which further extract joint features by learning edge weights and node embeddings, providing richer feature representations and the capability for nonlinear modeling. We evaluated our method on two challenging skeleton-based dynamic gesture datasets: it achieved an accuracy of 97.5% on the SHREC'17 track dataset and 94.3% on the DHG-14/28 dataset. These results demonstrate that the proposed ST-KT effectively captures dynamic skeleton changes and complex joint relationships.
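The spatio-temporal position embedding mentioned in the abstract can be sketched as follows. This is a minimal NumPy illustration of the general idea (adding a joint-identity embedding and a frame-index embedding to node features); the function name, tensor shapes, and the additive combination are assumptions for illustration, not the authors' published code.

```python
import numpy as np

def spatio_temporal_position_embedding(x, joint_emb, frame_emb):
    """Enrich skeleton node features with identity and time information.

    x:         (T, V, C) features — T frames, V joints, C channels.
    joint_emb: (V, C) spatial embedding, one vector per joint identity.
    frame_emb: (T, C) temporal embedding, one vector per frame index.
    (Hypothetical layout; the paper's exact scheme may differ.)
    """
    # Broadcast: joint embedding repeats over frames,
    # frame embedding repeats over joints.
    return x + joint_emb[None, :, :] + frame_emb[:, None, :]

# Toy example: 4 frames, 22 hand joints, 8 channels.
T, V, C = 4, 22, 8
x = np.zeros((T, V, C))
out = spatio_temporal_position_embedding(
    x, np.ones((V, C)), 2 * np.ones((T, C)))
print(out.shape)            # (4, 22, 8)
print(float(out[0, 0, 0]))  # 3.0  (0 + 1 + 2)
```

After this step, each node feature carries both which joint it belongs to and when it occurs, which is what lets the subsequent S-KT and T-KT attention layers distinguish nodes across the spatial and temporal axes.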
Pages: 23