A Study on the Use of Sequence-to-Sequence Neural Networks for Automatic Translation of Brazilian Portuguese to LIBRAS

Cited by: 3
Authors
Verissimo, Vinicius [1 ]
Silva, Cecilia [1 ]
Hanael, Vitor [1 ]
Moraes, Caio [1 ]
Costa, Rostand [1 ]
Maritan, Tiago [1 ]
Aschoff, Manuella [1 ]
Gaudencio, Thais [1 ]
Affiliations
[1] LAVID UFPB, Joao Pessoa, Paraiba, Brazil
Source
WEBMEDIA 2019: PROCEEDINGS OF THE 25TH BRAZILIAN SYMPOSIUM ON MULTIMEDIA AND THE WEB | 2019
Keywords
machine translation; neural networks; deep learning; accessibility; sign language; DESIGN;
DOI
10.1145/3323503.3360292
Chinese Library Classification (CLC)
TP301 [Theory and Methods]
Discipline Code
081202
Abstract
The World Health Organization estimates that approximately 466 million people have some level of hearing loss. This significant portion of the world population faces several challenges in accessing information. The main problem is that the languages that the deaf community can perceive and produce naturally are sign languages (SL). One alternative is to translate content from an oral language into SL. For online content, however, translation into SL must cover not only audio and video but also the more complex text found on websites. This is already a difficult task in itself because of the volume involved, and it poses additional challenges related to the high cost of human interpreter services and the highly dynamic nature of Internet content. In this context, one of the most promising approaches is the use of machine translation from oral to sign languages. This work evaluates the use of neural network models commonly employed in natural language processing to produce LIBRAS glosses from Portuguese texts. Using a 2^k factorial experiment design, we evaluated the impact of several factors, such as database size, model type, and training parameters, on the quality of the automatic translation obtained. The experimental results were very promising and point to an initial superiority of the LightConv model in most of the evaluated scenarios.
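To illustrate the 2^k factorial design mentioned in the abstract, the Python sketch below enumerates the runs of a full two-level factorial over three factors (database size, model type, number of epochs); the factor names and levels are assumptions chosen for illustration and do not reproduce the authors' exact experimental configuration.

    # Minimal sketch of a 2^k full factorial design (k = 3 factors, so 2^3 = 8 runs).
    # Factor names and levels are hypothetical, not the authors' actual values.
    from itertools import product

    factors = {
        "database_size": ["small", "large"],         # assumed low/high corpus sizes
        "model_type": ["transformer", "lightconv"],  # architectures compared
        "epochs": [20, 40],                          # assumed training budgets
    }

    # Each combination of factor levels defines one experimental run.
    runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]

    for i, run in enumerate(runs, start=1):
        print(f"run {i}: {run}")

Each resulting run is then trained and evaluated, which lets the effect of every factor (and their interactions) on translation quality be estimated from a modest number of experiments.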
Pages: 101-108
Number of pages: 8
Related Papers
50 records in total
  • [1] Understanding and Improving Sequence-to-Sequence Pretraining for Neural Machine Translation
    Wang, Wenxuan
    Jiao, Wenxiang
    Hao, Yongchang
    Wang, Xing
    Shi, Shuming
    Tu, Zhaopeng
    Lyu, Michael R.
    PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1 (LONG PAPERS), 2022: 2591-2600
  • [2] Double-attention mechanism of sequence-to-sequence deep neural networks for automatic speech recognition
    Yook, Dongsuk
    Lim, Dan
    Yoo, In-Chul
    JOURNAL OF THE ACOUSTICAL SOCIETY OF KOREA, 2020, 39 (05): 476-482
  • [3] De-duping URLs with Sequence-to-Sequence Neural Networks
    Xu, Keyang
    Liu, Zhengzhong
    Callan, Jamie
    SIGIR'17: PROCEEDINGS OF THE 40TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, 2017: 1157-1160
  • [4] FPGA implementation of sequence-to-sequence predicting spiking neural networks
    Ye, ChangMin
    Kornijcuk, Vladimir
    Kim, Jeeson
    Jeong, Doo Seok
    2020 17TH INTERNATIONAL SOC DESIGN CONFERENCE (ISOCC 2020), 2020: 322-323
  • [5] In-Image Neural Machine Translation with Segmented Pixel Sequence-to-Sequence Model
    Tian, Yanzhi
    Li, Xiang
    Liu, Zeming
    Guo, Yuhang
    Wang, Bin
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023: 15046-15057
  • [6] Sequence-to-Sequence Models for Emphasis Speech Translation
    Quoc Truong Do
    Sakti, Sakriani
    Nakamura, Satoshi
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2018, 26 (10): 1873-1883
  • [7] Data2Vis: Automatic Generation of Data Visualizations Using Sequence-to-Sequence Recurrent Neural Networks
    Dibia, Victor
    Demiralp, Cagatay
    IEEE COMPUTER GRAPHICS AND APPLICATIONS, 2019, 39 (05): 33-46
  • [8] Deep Sequence-to-Sequence Neural Networks for Ionospheric Activity Map Prediction
    Cherrier, Noelie
    Castaings, Thibaut
    Boulch, Alexandre
    NEURAL INFORMATION PROCESSING, ICONIP 2017, PT V, 2017, 10638: 545-555
  • [9] Jointly Masked Sequence-to-Sequence Model for Non-Autoregressive Neural Machine Translation
    Guo, Junliang
    Xu, Linli
    Chen, Enhong
    58TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2020), 2020: 376-385
  • [10] Turkish Data-to-Text Generation Using Sequence-to-Sequence Neural Networks
    Demir, Seniz
    ACM TRANSACTIONS ON ASIAN AND LOW-RESOURCE LANGUAGE INFORMATION PROCESSING, 2023, 22 (02)