Continuous word level sign language recognition using an expert system based on machine learning

Cited by: 0
Authors
Sreemathy R. [1 ]
Turuk M.P. [1 ]
Chaudhary S. [1 ]
Lavate K. [1 ]
Ushire A. [1 ]
Khurana S. [1 ]
Affiliations
[1] Department of Electronics and Telecommunication Engineering, Pune Institute of Computer Technology, Pune
Source
International Journal of Cognitive Computing in Engineering | 2023 / Vol. 4
Keywords
Continuous gesture recognition; Expert system; MediaPipe; Sign language recognition; Static gesture recognition; SVM; YOLOv4
DOI
10.1016/j.ijcce.2023.04.002
Abstract
Sign language recognition has been studied for many years using a wide range of image processing and artificial intelligence techniques, yet the central challenge remains bridging the communication gap between specially-abled people and the general public. This paper proposes a Python-based system that classifies 80 sign language words. Two models are proposed: You Only Look Once version 4 (YOLOv4) and a Support Vector Machine (SVM) operating on MediaPipe hand landmarks, where the SVM is evaluated with linear, polynomial, and Radial Basis Function (RBF) kernels. The system requires no additional pre-processing or image enhancement. The image dataset is self-created and consists of 80 static signs with a total of 676 images. The SVM with MediaPipe achieves an accuracy of 98.62% and YOLOv4 achieves 98.8%, exceeding existing state-of-the-art methods. An expert system is also proposed that combines both models to predict hand gestures more accurately in real time. © 2023
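
As a rough illustration of the SVM-with-MediaPipe pipeline summarized in the abstract (a minimal sketch, not the authors' code), the snippet below extracts 21 hand landmarks per image with MediaPipe Hands and trains a scikit-learn SVM on the flattened landmark coordinates. The file lists, label arrays, and the RBF-kernel choice are illustrative assumptions; the paper also evaluates linear and polynomial kernels.

```python
# Minimal sketch of a MediaPipe-landmark + SVM sign classifier.
# Assumed dependencies: mediapipe, opencv-python, scikit-learn, numpy.
import cv2
import numpy as np
import mediapipe as mp
from sklearn.svm import SVC

mp_hands = mp.solutions.hands

def landmark_features(image_bgr, hands):
    """Return a 63-dim vector (21 landmarks x (x, y, z)) or None if no hand is found."""
    results = hands.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None
    lm = results.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in lm]).flatten()

def build_dataset(image_paths, labels):
    """Extract features for every labelled image, skipping images with no detected hand."""
    X, y = [], []
    with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        for path, label in zip(image_paths, labels):
            img = cv2.imread(path)
            if img is None:
                continue
            feat = landmark_features(img, hands)
            if feat is not None:
                X.append(feat)
                y.append(label)
    return np.array(X), np.array(y)

# Hypothetical usage (train_paths/train_labels are placeholders):
# X, y = build_dataset(train_paths, train_labels)
# clf = SVC(kernel="rbf")
# clf.fit(X, y)
# print(clf.predict(X[:5]))
```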
Pages: 170-178
Number of pages: 8