Unsupervised Multimodal Machine Translation for Low-resource Distant Language Pairs

Cited by: 24
Authors
Tayir, Turghun [1]
Li, Lin [1]
Affiliations
[1] School of Computer Science and Artificial Intelligence, Wuhan University of Technology, 122 Luoshi Road, Wuhan 430070, People's Republic of China
Keywords
Visual masked language modeling; unsupervised machine translation; distant language pair; image feature
DOI
10.1145/3652161
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Unsupervised machine translation (UMT) has recently attracted increasing attention from researchers, as it enables models to translate between languages that lack parallel corpora. However, existing work mainly considers close language pairs (e.g., English-German and English-French), and the effectiveness of visual content for distant language pairs has yet to be investigated. This article proposes an unsupervised multimodal machine translation model for low-resource distant language pairs. Specifically, we first employ measures such as transliteration and re-ordering to bring distant language pairs closer together. We then use visual content to extend masked language modeling, yielding visual masked language modeling for UMT. Finally, empirical experiments are conducted on our distant-language-pair dataset and the public Multi30k dataset. The results demonstrate the superior performance of our model, with BLEU score improvements of 2.5 and 2.6 on the distant language pairs English-Uyghur and Chinese-Uyghur, respectively. Moreover, our model also achieves strong results on close language pairs, improving on existing models by 2.3 BLEU on English-German.
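
The first step named in the abstract, bringing a distant pair closer through transliteration and re-ordering, can be pictured with a minimal Python sketch. The character table and the SOV-to-SVO rule below are illustrative assumptions, not the authors' actual mappings or rules:

# Toy illustration of the two preprocessing measures named in the abstract:
# transliterating the source script and re-ordering words so that a distant
# language pair looks more alike on the surface.

# Tiny, illustrative Uyghur (Arabic script) -> Latin mapping; not the real table.
TRANSLIT = {"ا": "a", "ب": "b", "ت": "t", "ر": "r", "ن": "n", "ل": "l"}

def transliterate(text: str) -> str:
    """Map each character through the table; unknown characters pass through."""
    return "".join(TRANSLIT.get(ch, ch) for ch in text)

def reorder_sov_to_svo(tokens: list[str]) -> list[str]:
    """Naive rule moving a sentence-final verb to second position,
    approximating SOV -> SVO re-ordering (Uyghur is SOV, English SVO)."""
    if len(tokens) < 3:
        return tokens
    return [tokens[0], tokens[-1], *tokens[1:-1]]

# Example: ["men", "alma", "yeymen"] ("I apple eat") -> ["men", "yeymen", "alma"]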
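
The second step, visual masked language modeling, extends standard masked language modeling by letting masked tokens attend to image features. The following PyTorch sketch shows the general idea under assumed dimensions, layer counts, and module names; the paper's actual architecture may differ:

# Minimal sketch of a visual masked language modeling objective: a joint
# Transformer encoder over image-region features and token embeddings.
import torch
import torch.nn as nn

class VisualMLM(nn.Module):
    def __init__(self, vocab_size=10000, d_model=512, img_dim=2048, n_heads=8):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.img_proj = nn.Linear(img_dim, d_model)   # project region features
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, masked_ids, img_feats):
        # masked_ids: (B, T) token ids, some replaced by a [MASK] id
        # img_feats:  (B, R, img_dim) pre-extracted image-region features
        x = torch.cat([self.img_proj(img_feats), self.tok_emb(masked_ids)], dim=1)
        h = self.encoder(x)                # masked tokens attend to the image
        return self.lm_head(h[:, img_feats.size(1):])  # logits for text slots

# As in standard MLM, the loss is cross-entropy over masked positions only,
# e.g. with labels set to -100 at unmasked positions:
#   loss = nn.functional.cross_entropy(
#       logits.transpose(1, 2), labels, ignore_index=-100)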
Pages: 22