Enhancing the Applicability of Sign Language Translation

Cited by: 0
Authors
Li, Jiao [1 ,2 ]
Xu, Jiakai [1 ,3 ]
Liu, Yang [4 ]
Xu, Weitao [2 ]
Li, Zhenjiang [2 ]
Affiliations
[1] City Univ Hong Kong, Dept Comp Sci, Hong Kong, Peoples R China
[2] Southern Univ Sci & Technol, Dept Comp Sci & Engn, Shenzhen 518055, Peoples R China
[3] Columbia Univ City New York, Dept Comp Sci, New York, NY 10027 USA
[4] Univ Cambridge, Dept Comp Sci & Technol, Cambridge CB2 1TN, England
Keywords
Sensors; Assistive technologies; Gesture recognition; Libraries; Semantics; Computer science; Urban areas; Mobile computing; sign language translation; wearable sensing; RECOGNITION
DOI
10.1109/TMC.2024.3350111
CLC classification
TP [Automation Technology, Computer Technology]
Subject classification
0812
Abstract
This paper addresses a significant but overlooked problem in American Sign Language (ASL) translation systems. Current designs collect excessive sensing data for each word and treat every sentence as new, requiring sensing data to be collected from scratch. This process is time-consuming, taking hours to half a day per user, which places an unnecessary burden on end-users and hinders the widespread adoption of ASL systems. In this study, we identify the root cause of this issue and propose GASLA, a wearable sensor-based solution that automatically generates sentence-level sensing data from word-level data. An acceleration approach is further proposed to speed up data generation. Moreover, because a gap remains between generated sentence data and directly collected sentence data, a template strategy is proposed to make the generated sentences more similar to collected ones. The generated data can be used to train ASL systems effectively while significantly reducing overhead. GASLA offers several benefits over current approaches: it reduces both the initial setup time and the overhead of adding new sentences later; it requires only two samples per sentence, compared with around ten in current systems; and it improves overall performance significantly.
Pages: 8634-8648
Number of pages: 15
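
GASLA's central idea, as summarized in the abstract, is to synthesize sentence-level sensing data by assembling previously collected word-level segments. The sketch below is a minimal illustration of that idea under stated assumptions, not the paper's actual algorithm: the function name stitch_words, the cross-fade blending at word junctions, and the (frames x channels) IMU array layout are all assumptions made for the example.

import numpy as np

def stitch_words(word_segments, blend_len=10):
    """Concatenate word-level IMU recordings into one sentence-level
    stream, cross-fading each junction to approximate the smooth
    inter-word transition a real signer would produce.

    word_segments: list of (frames x channels) float arrays.
    blend_len: number of frames blended at each junction.
    """
    sentence = word_segments[0]
    for seg in word_segments[1:]:
        n = min(blend_len, len(sentence), len(seg))
        w = np.linspace(0.0, 1.0, n)[:, None]  # ramp weights from old word to new word
        blended = (1.0 - w) * sentence[-n:] + w * seg[:n]
        sentence = np.concatenate([sentence[:-n], blended, seg[n:]])
    return sentence

# Toy usage: three "words", each 40 frames of 6-axis IMU data.
words = [np.random.randn(40, 6) for _ in range(3)]
synthetic = stitch_words(words)
print(synthetic.shape)  # (100, 6): 3 x 40 frames minus two 10-frame overlaps

In a real pipeline, such synthetic sentences would serve as training input for the translation model, with a template step, as the abstract describes, pulling them closer to directly collected sentence recordings.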