Siamese Convolutional Neural Network for ASL Alphabet Recognition

Cited by: 0
Authors
Fierro Radilla, Atoany Nazareth [1 ]
Perez Daniel, Karina Ruby [1 ]
Affiliations
[1] Univ Panamer, Engn Fac, Mexico City, DF, Mexico
Source
COMPUTACION Y SISTEMAS | 2020 / Vol. 24 / Issue 03
Keywords
Siamese network; CNN; ASL alphabet recognition; similarity learning; deep learning; HAND-GESTURE RECOGNITION;
DOI
10.13053/CyS-24-3-3481
Chinese Library Classification (CLC) Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
American Sign Language (ASL) is an important means of communication within the deaf community in North America and is used primarily by people with hearing or speech impairments. The deaf community faces difficulties in schools and other institutions, which usually consist mainly of hearing people; in addition, deaf people often feel misunderstood by those who do not know sign language, such as family members. Over the last two decades, researchers have proposed automatic sign language recognition systems to facilitate the learning of sign language, and computer scientists now focus on artificial intelligence to develop systems capable of reducing the communication gap between hearing and deaf people. In this paper, a Siamese convolutional neural network for American Sign Language alphabet recognition is proposed. The Siamese architecture, trained with similarity learning, allows the model to cope with the high inter-class similarity and high intra-class variation among the signs. The results show that the proposed method outperforms state-of-the-art systems.
Pages: 1211-1218
Number of pages: 8
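
The record does not include the paper's exact network configuration, so the following is only a minimal, illustrative PyTorch sketch of the general technique the abstract describes: a Siamese CNN whose two weight-sharing branches produce embeddings, trained with a contrastive similarity-learning loss that pulls same-letter pairs together and pushes different-letter pairs apart. The backbone depth, embedding size, margin, and input resolution are assumptions for illustration, not the authors' reported design.

```python
# Minimal sketch of a generic Siamese CNN with contrastive loss (PyTorch).
# Layer sizes, margin, and input resolution are illustrative assumptions,
# not the architecture reported in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseCNN(nn.Module):
    def __init__(self, embedding_dim: int = 128):
        super().__init__()
        # Shared convolutional backbone applied to both images of a pair.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, embedding_dim)

    def embed(self, x: torch.Tensor) -> torch.Tensor:
        # Map one image to an L2-normalized embedding vector.
        z = self.backbone(x).flatten(1)
        return F.normalize(self.fc(z), dim=1)

    def forward(self, x1: torch.Tensor, x2: torch.Tensor):
        # Both branches share the same weights (the "Siamese" property).
        return self.embed(x1), self.embed(x2)

def contrastive_loss(z1, z2, same_class, margin: float = 1.0):
    # same_class: 1.0 for pairs showing the same letter, 0.0 otherwise.
    # Pulls same-letter pairs together and pushes different-letter pairs
    # at least `margin` apart in embedding space.
    dist = F.pairwise_distance(z1, z2)
    return (same_class * dist.pow(2)
            + (1.0 - same_class) * F.relu(margin - dist).pow(2)).mean()

if __name__ == "__main__":
    model = SiameseCNN()
    # Dummy batch: pairs of 64x64 RGB hand images plus a same/different label.
    a = torch.randn(8, 3, 64, 64)
    b = torch.randn(8, 3, 64, 64)
    y = torch.randint(0, 2, (8,)).float()
    loss = contrastive_loss(*model(a, b), y)
    loss.backward()
    print(loss.item())
```

At inference time a sketch like this would classify a query hand image by comparing its embedding against reference embeddings of the ASL letters and picking the closest one; this pair-based training is what lets the model discriminate visually similar letters.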