Dynamic Context-guided Capsule Network for Multimodal Machine Translation

Citations: 52
Authors
Lin, Huan [1 ]
Meng, Fandong [2 ]
Su, Jinsong [1 ]
Yin, Yongjing [1 ]
Yang, Zhengyuan [3 ]
Ge, Yubin [4 ]
Zhou, Jie [2 ]
Luo, Jiebo [3 ]
Affiliations
[1] Xiamen Univ, Xiamen, Peoples R China
[2] Tencent WeChat AI, Shenzhen, Peoples R China
[3] Univ Rochester, Rochester, NY 14627 USA
[4] Univ Illinois, Champaign, IL USA
Source
MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA | 2020
Funding
National Natural Science Foundation of China
Keywords
Multimodal Machine Translation; Capsule Network; Transformer
DOI
10.1145/3394171.3413715
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Multimodal machine translation (MMT), which mainly focuses on enhancing text-only translation with visual features, has attracted considerable attention from both the computer vision and natural language processing communities. Most current MMT models resort to the attention mechanism, global context modeling, or multimodal joint representation learning to exploit visual features. However, the attention mechanism lacks sufficient semantic interaction between modalities, while the other two provide a fixed visual context, which is unsuitable for modeling the variability observed when generating translations. To address these issues, in this paper we propose a novel Dynamic Context-guided Capsule Network (DCCN) for MMT. Specifically, at each decoding timestep, we first employ conventional source-target attention to produce a timestep-specific source-side context vector. DCCN then takes this vector as input and uses it to guide the iterative extraction of related visual features via a context-guided dynamic routing mechanism. In particular, since we represent the input image with both global and regional visual features, we introduce two parallel DCCNs to model multimodal context vectors over visual features at different granularities. Finally, the two resulting multimodal context vectors are fused and incorporated into the decoder for the prediction of the target word. Experimental results on the Multi30K dataset for English-to-German and English-to-French translation demonstrate the superiority of DCCN.
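To make the routing mechanism described above concrete, the following is a minimal PyTorch sketch of context-guided dynamic routing over one set of visual capsules. The tensor shapes, the function name context_guided_routing, and the exact form of the context-guidance term in the logit update are illustrative assumptions, not the paper's published formulation.

import torch
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    # Standard capsule squashing non-linearity (Sabour et al., 2017).
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)

def context_guided_routing(in_caps, context, W, num_iters=3):
    """Sketch of context-guided dynamic routing (shapes are assumptions).

    in_caps: (N, d_in)            low-level capsules, e.g. regional visual features
    context: (d_out,)             timestep-specific source-side context vector
    W:       (N, M, d_in, d_out)  vote transformation matrices
    Returns a multimodal context vector of shape (M * d_out,).
    """
    N, M = W.shape[0], W.shape[1]
    # Each input capsule casts a vote for every output capsule.
    votes = torch.einsum('nd,nmde->nme', in_caps, W)      # (N, M, d_out)
    logits = torch.zeros(N, M)                            # routing logits b
    for _ in range(num_iters):
        coupling = F.softmax(logits, dim=1)               # coupling coefficients c
        s = (coupling.unsqueeze(-1) * votes).sum(dim=0)   # (M, d_out)
        out_caps = squash(s)
        # Update logits by vote/output agreement, plus an extra term that
        # steers routing toward features relevant to the current decoding
        # context (an assumed form of the paper's context guidance).
        logits = (logits
                  + torch.einsum('nme,me->nm', votes, out_caps)
                  + torch.einsum('nme,e->nm', votes, context))
    return out_caps.reshape(-1)                           # multimodal context

In the full model, two such modules, one over global and one over regional visual features, would yield the two multimodal context vectors that are fused and fed to the decoder before predicting each target word.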
Pages: 1320-1329
Page count: 10