Technological Solutions for Sign Language Recognition: A Scoping Review of Research Trends, Challenges, and Opportunities

Cited by: 15
Authors
Joksimoski, Boban [1 ]
Zdravevski, Eftim [1 ]
Lameski, Petre [1 ]
Pires, Ivan Miguel [2 ,3 ]
Melero, Francisco Jose [4 ,5 ]
Martinez, Tomas Puebla
Garcia, Nuno M. [2 ]
Mihajlov, Martin [6 ]
Chorbev, Ivan [1 ]
Trajkovik, Vladimir [1 ]
Affiliations
[1] Saints Cyril & Methodius Univ Skopje, Fac Comp Sci & Engn, Skopje 1000, North Macedonia
[2] Univ Beira Interior, Inst Telecomun, P-6201001 Covilha, Portugal
[3] Univ Tras-os-Montes & Alto Douro, Escola Ciencias & Tecnol, P-5001801 Vila Real, Portugal
[4] Region Murcia CETEM, Technol Ctr Furniture & Wood, Murcia 30510, Spain
[5] Polytech Univ Cartagena, Telecommunicat Networks Engn Grp, Cartagena 30202, Spain
[6] Jozef Stefan Inst, Lab Open Syst & Networks, Ljubljana 1000, Slovenia
Keywords
Assistive technologies; Gesture recognition; Auditory system; Hidden Markov models; Visualization; Speech recognition; Systematics; Sign language recognition; systematic review; sign language visualization; HAND GESTURE RECOGNITION; MODEL; DEAF; INTERFACE; SPEECH; DAMAGE; TIME;
DOI
10.1109/ACCESS.2022.3161440
CLC Classification Number
TP [Automation technology; computer technology]
Subject Classification Code
0812
Abstract
Sign languages convey meaning through a visual-manual modality and are the primary means of communication for deaf and hard-of-hearing people with their family members and with society. With advances in computer graphics, computer vision, and neural networks, and the introduction of new powerful hardware, research into sign languages has shown new potential. Novel technologies can help people learn, communicate, interpret, translate, visualize, document, and develop various sign languages and their related skills. This paper reviews the technological advancements applied in sign language recognition, visualization, and synthesis. We defined multiple research questions to identify the underlying technological drivers that address the challenges in this domain. The study is designed in accordance with the PRISMA methodology. We searched for articles published between 2010 and 2021 in multiple digital libraries (i.e., Elsevier, Springer, IEEE, PubMed, and MDPI). To automate the initial steps of PRISMA (identifying potentially relevant articles, removing duplicates, and basic screening), we utilized a Natural Language Processing toolkit. We then synthesized the existing body of knowledge and identified the studies that achieved significant advancements in sign language recognition, visualization, and synthesis. The trends identified from the analysis of almost 2000 papers clearly show that technology developments, especially in image processing and deep learning, are driving new applications and tools that improve the various performance metrics in these sign language-related tasks. Finally, we identified which techniques and devices contribute to such results, as well as the common threads and gaps that would open new research directions in the field.
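The abstract notes that the initial PRISMA identification steps (duplicate removal and basic screening of candidate articles) were automated with a Natural Language Processing toolkit, but the record does not detail that implementation. The sketch below is a minimal, hypothetical illustration of the general idea in Python: the `Record` fields, the inclusion keywords, and the DOI/normalized-title deduplication rule are all assumptions for illustration and are not the authors' actual toolkit.

```python
# Hypothetical sketch of automated deduplication and keyword screening of
# article records, in the spirit of the NLP-assisted PRISMA identification
# step described in the abstract. All field names and keywords are assumed.
from dataclasses import dataclass
import re


@dataclass
class Record:
    title: str
    abstract: str
    doi: str = ""


def normalize(text: str) -> str:
    """Lower-case and drop punctuation so near-identical titles compare equal."""
    return re.sub(r"[^a-z0-9 ]", "", text.lower()).strip()


def deduplicate(records: list[Record]) -> list[Record]:
    """Keep the first occurrence of each DOI (or, failing that, normalized title)."""
    seen: set[str] = set()
    unique: list[Record] = []
    for rec in records:
        key = rec.doi.lower() or normalize(rec.title)
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique


# Inclusion keywords chosen purely for illustration.
KEYWORDS = ("sign language", "gesture recognition", "sign recognition")


def passes_screening(rec: Record) -> bool:
    """Retain a record if any inclusion keyword appears in its title or abstract."""
    text = f"{rec.title} {rec.abstract}".lower()
    return any(kw in text for kw in KEYWORDS)


if __name__ == "__main__":
    candidates = [
        Record("Deep learning for sign language recognition", "A CNN-based approach.", "10.1000/x1"),
        Record("Deep Learning for Sign Language Recognition!", "Duplicate entry.", "10.1000/x1"),
        Record("Soil moisture sensing with LoRa", "Unrelated topic.", "10.1000/x2"),
    ]
    kept = [r for r in deduplicate(candidates) if passes_screening(r)]
    print(f"{len(kept)} of {len(candidates)} records retained after deduplication and screening")
```

In a real PRISMA workflow this automated pass would only narrow the candidate set; borderline records would still go to manual eligibility review.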
Pages: 40979-40998
Number of pages: 20