Multi-Stream General and Graph-Based Deep Neural Networks for Skeleton-Based Sign Language Recognition

Cited by: 17
Authors
Miah, Abu Saleh Musa [1 ]
Hasan, Md. Al Mehedi [2 ]
Jang, Si-Woong [3 ]
Lee, Hyoun-Sup [4 ]
Shin, Jungpil [1 ]
Affiliations
[1] Univ Aizu, Sch Comp Sci & Engn, Aizu Wakamatsu 9658580, Japan
[2] Rajshahi Univ Engn & Technol RUET, Dept Comp Sci & Engn, Rajshahi 6204, Bangladesh
[3] Dong Eui Univ, Dept Comp Engn, Busan 47340, South Korea
[4] Dong Eui Univ, Dept Appl Software Engn, Busan 47340, South Korea
Keywords
sign language recognition (SLR); large scale dataset; American Sign Language; Turkish Sign Language; Chinese Sign Language; AUTSL; CSL;
DOI
10.3390/electronics12132841
Chinese Library Classification (CLC): TP [Automation Technology, Computer Technology]
Discipline Code: 0812
Abstract
Sign language recognition (SLR) aims to bridge the gap between speech-impaired and hearing communities by recognizing signs from videos. However, complex backgrounds, lighting variation, and differences in subject appearance still make it challenging to develop effective SLR systems. Many researchers have recently turned to skeleton-based SLR to overcome subject and background variation in hand-gesture sign videos. However, skeleton-based SLR remains under-explored, mainly due to a lack of information and of hand key-point annotations. More recently, researchers have incorporated body and face information alongside hand gesture information for SLR; however, the resulting accuracy and generalizability remain unsatisfactory. In this paper, we propose a multi-stream graph-based deep neural network (SL-GDN) for a skeleton-based SLR system to overcome the above problems. The main purpose of the proposed SL-GDN approach is to improve the generalizability and accuracy of the SLR system while keeping the computational cost low, based on the human body pose in the form of 2D landmark locations. We first construct a skeleton graph from 27 whole-body key points selected among 67 key points to reduce the computational cost. We then use the multi-stream SL-GDN to extract features from the whole-body skeleton graph over four streams. Finally, we concatenate the four feature sets and apply a classification module to refine the features and recognize the corresponding sign classes. Our data-driven graph construction method increases the system's flexibility and generalizability, allowing it to adapt to varied data. We evaluate the proposed model on two large-scale benchmark SLR datasets: the Turkish Sign Language dataset (AUTSL) and the Chinese Sign Language dataset (CSL).
The reported accuracy results demonstrate the strong performance of the proposed model, and we believe it constitutes a notable contribution to the SLR domain.
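The pipeline described in the abstract (derive multiple streams from the 2D landmarks of 27 selected key points, then fuse the per-stream features by concatenation) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the abstract does not name the four streams, so the choices here (joint, bone, joint-motion, bone-motion, a common convention in skeleton-based recognition) are assumptions, and the parent indices defining the skeleton tree are hypothetical.

```python
# Hedged sketch of a four-stream skeleton pipeline. Stream definitions
# (joint / bone / joint-motion / bone-motion) and the PARENTS tree are
# assumptions for illustration; the paper's actual graph is not given
# in the abstract.
import numpy as np

NUM_JOINTS = 27  # subset selected from 67 whole-body key points
# Hypothetical parent index per joint (a simple chain rooted at joint 0).
PARENTS = np.array([0] + list(range(NUM_JOINTS - 1)))

def four_streams(pose):
    """pose: (T, 27, 2) array of 2D landmark locations over T frames."""
    joint = pose
    bone = pose - pose[:, PARENTS, :]                       # vector to parent joint
    joint_motion = np.diff(pose, axis=0, prepend=pose[:1])  # frame-to-frame delta
    bone_motion = np.diff(bone, axis=0, prepend=bone[:1])
    return joint, bone, joint_motion, bone_motion

def fuse(features):
    """Late fusion: flatten each stream per frame and concatenate."""
    return np.concatenate([f.reshape(f.shape[0], -1) for f in features], axis=-1)

T = 8
pose = np.random.rand(T, NUM_JOINTS, 2).astype(np.float32)
fused = fuse(four_streams(pose))
print(fused.shape)  # (8, 216): 4 streams x 27 joints x 2 coordinates
```

In the paper's full model, each stream would pass through its own graph-based network before fusion; here the raw streams are concatenated directly only to show the data flow.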
Pages: 15