A Real-Time Dynamic Gesture Variability Recognition Method Based on Convolutional Neural Networks

Cited: 0
Authors
Amangeldy, Nurzada [1 ]
Milosz, Marek [2 ]
Kudubayeva, Saule [1 ]
Kassymova, Akmaral [3 ]
Kalakova, Gulsim [4 ]
Zhetkenbay, Lena [1 ]
Affiliations
[1] LN Gumilyov Eurasian Natl Univ, Fac Informat Technol, Dept Artificial Intelligence Technol, Pushkina 11, Astana 010008, Kazakhstan
[2] Lublin Univ Technol, Fac Elect Engn & Comp Sci, Dept Comp Sci, Nadbystrzycka 36B, PL-20618 Lublin, Poland
[3] Zhangir Khan Univ, Fac Econ Informat Technol & Vocat Educ, Higher Sch Informat Technol, Zhangir Khan 51, Uralsk 090009, Kazakhstan
[4] A Baitursynov Kostanay Reg Univ, Dept Phys Math & Digital Technol, Kostanay 110000, Kazakhstan
Source
APPLIED SCIENCES-BASEL | 2023, Vol. 13, No. 19
Keywords
sign language recognition; CNN; multimodality; preprocessing; SIGN-LANGUAGE RECOGNITION; VIDEO;
DOI
10.3390/app131910799
Chinese Library Classification
O6 [Chemistry];
Discipline Classification Code
0703;
Abstract
Among the many problems in machine learning, one of the most critical is improving the prediction rate for categorical responses based on extracted features. Nevertheless, most of the time in the full cycle of multi-class machine learning modeling for sign language recognition is spent on data preparation: collection, filtering, analysis, and visualization. To address this problem, this paper proposes a methodology for automatically collecting the spatiotemporal features of gestures by calculating the coordinates of the detected pose and hand regions, normalizing them, and constructing an optimal multilayer perceptron for multiclass classification. By extracting and analyzing spatiotemporal data, the proposed method identifies not only static features but also the spatial features of gestures (for gestures that touch the face and head) and their dynamic features, which increases gesture recognition accuracy. Gestures were also classified according to the form of their demonstration so as to extract their characteristics optimally (the visibility of all connection points), which raised the recognition accuracy for certain classes to 0.96. The method was validated on the well-known Ankara University Turkish Sign Language Dataset and the Dataset for Argentinian Sign Language, achieving a recognition accuracy of 0.98.
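The pipeline the abstract describes (extract pose/hand landmark coordinates, normalize them, feed them to a multilayer perceptron for multiclass classification) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the landmark layout (21 hand points, as produced by common hand-tracking toolkits), the normalization scheme, and the network sizes are all assumptions.

```python
import numpy as np

def normalize_landmarks(coords):
    """Translate landmarks so the first point is the origin, then divide by
    the largest distance from it, making features translation- and
    scale-invariant (the exact scheme here is illustrative)."""
    coords = np.asarray(coords, dtype=float)
    centered = coords - coords[0]
    scale = np.max(np.linalg.norm(centered, axis=1))
    return centered / scale if scale > 0 else centered

class MLP:
    """Minimal two-layer perceptron for multiclass gesture classification
    (forward pass only; training is omitted for brevity)."""
    def __init__(self, n_in, n_hidden, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_classes))
        self.b2 = np.zeros(n_classes)

    def forward(self, x):
        h = np.maximum(0.0, x @ self.W1 + self.b1)  # ReLU hidden layer
        logits = h @ self.W2 + self.b2
        e = np.exp(logits - logits.max())           # numerically stable softmax
        return e / e.sum()                          # class probabilities

# Example: 21 hand landmarks with (x, y) coordinates -> 42-dim feature vector
landmarks = np.random.default_rng(1).uniform(0.0, 1.0, (21, 2))
features = normalize_landmarks(landmarks).ravel()
probs = MLP(n_in=42, n_hidden=64, n_classes=10).forward(features)
```

In practice a sequence of such normalized frames would be stacked to capture the dynamic features the paper emphasizes; a single frame is shown here only to keep the sketch self-contained.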
Pages: 18
References (33 total)
  • [1] Abdullahi, Sunusi Bala; Chamnongthai, Kosin. American Sign Language Words Recognition Using Spatio-Temporal Prosodic and Angle Features: A Sequential Learning Approach. IEEE ACCESS, 2022, 10: 15911-15923
  • [2] Ahmed, M. A.; Zaidan, B. B.; Zaidan, A. A.; Alamoodi, A. H.; Albahri, O. S.; Al-Qaysi, Z. T.; Albahri, A. S.; Salih, Mahmood M. Real-time sign language framework based on wearable device: analysis of MSL, DataGlove, and gesture recognition. SOFT COMPUTING, 2021, 25 (16): 11101-11122
  • [3] Ahmed, M. A.; Zaidan, B. B.; Zaidan, A. A.; Salih, Mahmood M.; Al-qaysi, Z. T.; Alamoodi, A. H. Based on wearable sensory device in 3D-printed humanoid: A new real-time sign language recognition system. MEASUREMENT, 2021, 168
  • [4] Al-Samarraay, Mohammed S.; Zaidan, A. A.; Albahri, O. S.; Pamucar, Dragan; AlSattar, H. A.; Alamoodi, A. H.; Zaidan, B. B.; Albahri, A. S. Extension of interval-valued Pythagorean FDOSM for evaluating and benchmarking real-time SLRSs based on multidimensional criteria of hand gesture recognition and sensor glove perspectives. APPLIED SOFT COMPUTING, 2022, 116
  • [5] Alrubayi, Ali H.; Ahmed, M. A.; Zaidan, A. A.; Albahri, A. S.; Zaidan, B. B.; Albahri, O. S.; Alamoodi, A. H.; Alazab, Mamoun. A pattern recognition model for static gestures in Malaysian sign language based on machine learning techniques. COMPUTERS & ELECTRICAL ENGINEERING, 2021, 95
  • [6] Amangeldy, N. J. Theor. Appl. Inf. Technol., 2022, 100: 5506
  • [7] Amangeldy, Nurzada; Kudubayeva, Saule; Kassymova, Akmaral; Karipzhanova, Ardak; Razakhova, Bibigul; Kuralov, Serikbay. Sign Language Recognition Method Based on Palm Definition Model and Multiple Classification. SENSORS, 2022, 22 (17)
  • [8] Bird, Jordan J.; Ekart, Aniko; Faria, Diego R. British Sign Language Recognition via Late Fusion of Computer Vision and Leap Motion with Transfer Learning to American Sign Language. SENSORS, 2020, 20 (18): 1-19
  • [9] Dayal, Aveen; Paluru, Naveen; Cenkeramaddi, Linga Reddy; Soumya, J.; Yalavarthy, Phaneendra K. Design and Implementation of Deep Learning Based Contactless Authentication System Using Hand Gestures. ELECTRONICS, 2021, 10 (02): 1-15
  • [10] De Coster, Mathieu; Van Herreweghe, Mieke; Dambre, Joni. Isolated Sign Recognition from RGB Video using Pose Flow and Self-Attention. 2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2021, 2021: 3436-3445