Contrastive Adversarial Training for Multi-Modal Machine Translation

Cited by: 2
Authors
Huang, Xin [1 ]
Zhang, Jiajun [1 ]
Zong, Chengqing [1 ]
Affiliations
[1] Univ Chinese Acad Sci, Chinese Acad Sci, Sch Artificial Intelligence, Natl Lab Pattern Recognit, Inst Automat, Intelligence Bldg, 95 Zhongguancun East Rd, Beijing 100190, Peoples R China
Keywords
Contrastive learning; adversarial training; multi-modal machine translation
DOI
10.1145/3587267
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The multi-modal machine translation task aims to improve translation quality with the help of additional visual input, which is expected to disambiguate or complement the semantics when sentences contain ambiguous words or incomplete expressions. Existing methods have tried many ways to fuse visual information into text representations. However, only a minority of sentences need extra visual information as a complement, and without guidance, models tend to learn text-only translation from the majority of well-aligned translation pairs. In this article, we propose a contrastive adversarial training approach to enhance visual participation in semantic representation learning. By contrasting the multi-modal input with adversarial samples, the model learns to identify the most informative sample, which is coupled with a congruent image and several visual objects extracted from it. This approach prevents the visual information from being ignored and further fuses cross-modal information. We evaluate our method on three multi-modal language pairs. Experimental results show that our model improves translation accuracy, and further analysis shows that it is more sensitive to visual information.
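
The abstract describes the contrastive objective only at a high level. As a rough illustration, an objective of this kind can be written as an InfoNCE-style contrastive loss in which the sentence representation is pulled toward its congruent visual input (the image and the objects extracted from it) and pushed away from adversarial, mismatched visual samples. The sketch below is a minimal PyTorch rendering under that assumption; the function name, tensor shapes, and temperature value are hypothetical and not taken from the paper.

```python
import torch
import torch.nn.functional as F

def contrastive_adversarial_loss(text_repr, pos_visual_repr, adv_visual_reprs,
                                 temperature=0.1):
    """InfoNCE-style contrastive loss (illustrative, not the paper's exact form).

    The text representation should score higher with its congruent visual
    features than with any adversarial (mismatched) visual sample.

    text_repr:        (batch, dim)     sentence representations
    pos_visual_repr:  (batch, dim)     features of the congruent image/objects
    adv_visual_reprs: (batch, k, dim)  features of k adversarial samples
    """
    # Similarity with the congruent visual input -> (batch, 1)
    pos_sim = F.cosine_similarity(text_repr, pos_visual_repr, dim=-1).unsqueeze(1)
    # Similarity with each adversarial sample -> (batch, k)
    neg_sim = F.cosine_similarity(text_repr.unsqueeze(1), adv_visual_reprs, dim=-1)
    # Softmax over [positive | negatives]; the correct class is index 0
    logits = torch.cat([pos_sim, neg_sim], dim=1) / temperature
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)
```

Trained jointly with the usual translation cross-entropy, a loss of this shape penalizes the model whenever an incongruent sample looks as plausible as the true image, which is one way to keep the visual signal from being ignored on the many examples where the text alone suffices.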
Pages: 18