A computer vision-based system for recognition and classification of Urdu sign language dataset

Cited by: 0
Authors
Zahid H. [1 ,5 ]
Rashid M. [2 ]
Syed S.A. [3 ]
Ullah R. [4 ]
Asif M. [5 ]
Khan M. [3 ]
Mujeeb A.A. [6 ]
Khan A.H. [6 ]
Affiliations
[1] Biomedical Engineering Department and Electrical Engineering Department, Ziauddin University, Karachi
[2] Electrical Engineering Department and Software Engineering Department, Ziauddin University, Karachi
[3] Biomedical Engineering Department, Sir Syed University of Engineering and Technology, Karachi
[4] Optimizia, Karachi
[5] Electrical Engineering Department, Ziauddin University, Karachi
[6] Biomedical Engineering Department, Ziauddin University, Karachi
Keywords
Bag of words; KNN; Pattern recognition; Random Forest; Sign language; SVM; Urdu sign language
DOI
10.7717/PEERJ-CS.1174
Abstract
Human beings rely heavily on social communication, and language is the most effective means of verbal and nonverbal interaction. To bridge the communication gap between the deaf community and non-deaf people, sign language is widely used. According to the World Federation of the Deaf, there are about 70 million deaf people around the globe and about 300 sign languages in use. Hence, a structured system of hand gestures involving visual motions and signs serves as a communication medium that helps the deaf and speech-impaired community in daily interaction. The aim of this work is to collect a dataset of Urdu sign language (USL) and test it with machine learning classifiers. The proposed system is divided into four main stages: data collection, data acquisition, model training, and model testing. The USL dataset, comprising 1,560 images, was created by photographing various hand positions with a camera. This work provides a strategy for automated identification of USL numbers based on a bag-of-words (BoW) paradigm. For classification, support vector machine (SVM), Random Forest, and K-nearest neighbor (K-NN) classifiers are used with the BoW histogram bin frequencies as features. The proposed technique attains accuracies of 88%, 90%, and 84% for Random Forest, SVM, and K-NN, respectively, in number classification. © 2022 Zahid et al.
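The BoW pipeline the abstract describes can be sketched as follows. This is a minimal, NumPy-only illustration, not the authors' implementation: real local descriptors extracted from the 1,560 hand images are replaced by synthetic vectors, a hand-rolled k-means builds the visual vocabulary, and a simple K-NN classifier stands in for the SVM/Random Forest/K-NN comparison; all sizes and names are illustrative assumptions.

```python
# Sketch of a bag-of-visual-words (BoW) classification pipeline, assuming:
# synthetic descriptors in place of real image features, and K-NN as the
# example classifier. Sizes (VOCAB, DIM, etc.) are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(0)
N_IMAGES, N_CLASSES, VOCAB, DIM = 60, 3, 8, 16

labels = rng.integers(0, N_CLASSES, N_IMAGES)
# Stand-in for per-image local descriptors (40 per image); class-dependent
# means keep the toy problem separable.
descriptors = [rng.normal(loc=3.0 * lab, scale=1.0, size=(40, DIM))
               for lab in labels]

def kmeans(data, k, iters=20):
    """Plain Lloyd's algorithm: returns k centroids (the visual vocabulary)."""
    centroids = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        # Assign every descriptor to its nearest centroid, then recenter.
        dist = np.linalg.norm(data[:, None] - centroids[None], axis=2)
        assign = dist.argmin(axis=1)
        for j in range(k):
            members = data[assign == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids

vocab = kmeans(np.vstack(descriptors), VOCAB)

def bow_hist(desc):
    """Encode one image as a normalized histogram of visual-word counts."""
    words = np.linalg.norm(desc[:, None] - vocab[None], axis=2).argmin(axis=1)
    hist = np.bincount(words, minlength=VOCAB).astype(float)
    return hist / hist.sum()

X = np.array([bow_hist(d) for d in descriptors])

def knn_predict(x, k=3):
    """Majority vote among the k nearest BoW histograms."""
    nearest = np.linalg.norm(X - x, axis=1).argsort()[:k]
    return np.bincount(labels[nearest]).argmax()

preds = np.array([knn_predict(x) for x in X])
accuracy = (preds == labels).mean()
print(f"K-NN train accuracy on BoW histograms: {accuracy:.2f}")
```

The histogram bin frequencies produced by `bow_hist` play the role of the feature vectors the paper feeds to its SVM, Random Forest, and K-NN classifiers; swapping in those classifiers (e.g. from scikit-learn) changes only the final step.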