Efficient Sign Language Recognition System and Dataset Creation Method Based on Deep Learning and Image Processing

Cited by: 2
Authors
Cavalcante Carneiro, A. L. [1 ]
Silva, L. Brito [1 ]
Pinheiro Salvadeo, D. H. [1 ]
Affiliations
[1] State Univ Sao Paulo, Dept Stat Appl Math & Computat, Av 24A,1515, BR-13506700 Rio Claro, SP, Brazil
Source
THIRTEENTH INTERNATIONAL CONFERENCE ON DIGITAL IMAGE PROCESSING (ICDIP 2021) | 2021 / Vol. 11878
Keywords
Deep learning; sign language recognition; convolutional neural networks; image classification; object detection;
DOI
10.1117/12.2601018
Chinese Library Classification (CLC)
O43 [Optics];
Discipline Classification Code
070207; 0803;
Abstract
New deep-learning architectures are created every year, achieving state-of-the-art results in image recognition and leading to the belief that complex tasks such as sign language translation will soon become considerably easier, serving as a communication tool for the deaf and hard-of-hearing community. However, these algorithms still require large amounts of training data, and the dataset creation process is expensive, time-consuming, and slow. This work therefore investigates techniques of digital image processing and machine learning that can be used to create a sign language dataset effectively. We examine data-acquisition choices, such as the frame rate at which videos are captured or subsampled, the background type, preprocessing, and data augmentation, using convolutional neural networks and object detection to build an image classifier and comparing the results with statistical tests. Different datasets were created to test the hypotheses, containing 14 everyday words recorded with different smartphones in the RGB color system. We achieved an accuracy of 96.38% on the test set and 81.36% on a validation set with more challenging conditions, showing that 30 FPS is the best subsampling frame rate for training the classifier, that geometric transformations work better than intensity transformations, and that artificial background creation does not improve model generalization. These trade-offs can serve as a cost-benefit guideline, balancing computational cost against accuracy gain, when creating a dataset and training a sign recognition model.
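The abstract reports that a 30 FPS subsample was the best frame rate for training, which implies discarding frames from clips captured at a higher rate. A minimal sketch of such frame-index subsampling, assuming a simple evenly spaced selection strategy (the function name and approach are illustrative, not the authors' actual implementation):

```python
def subsample_frame_indices(total_frames, source_fps, target_fps):
    """Select evenly spaced frame indices so the kept frames
    approximate target_fps from a video captured at source_fps."""
    if target_fps >= source_fps:
        return list(range(total_frames))  # nothing to discard
    step = source_fps / target_fps  # e.g. 60 -> 30 FPS gives step 2.0
    indices, cursor = [], 0.0
    while int(cursor) < total_frames:
        indices.append(int(cursor))
        cursor += step
    return indices

# A 60-frame clip at 60 FPS subsampled to 30 FPS keeps every other frame.
print(subsample_frame_indices(60, 60.0, 30.0)[:5])  # [0, 2, 4, 6, 8]
```

In practice the selected indices would be used to pick frames while decoding the smartphone videos (e.g. with OpenCV's `VideoCapture`), trading temporal resolution against dataset size and training cost.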
Pages: 9