MUTAN: Multimodal Tucker Fusion for Visual Question Answering

Cited by: 389
Authors
Ben-younes, Hedi [1 ,2 ]
Cadene, Remi [1 ]
Cord, Matthieu [1 ]
Thome, Nicolas [3 ]
Affiliations
[1] UPMC, Sorbonne Univ, CNRS, LIP6, Paris, France
[2] Heuritech, Paris, France
[3] CNAM, Paris, France
Source
2017 IEEE International Conference on Computer Vision (ICCV) | 2017
DOI
10.1109/ICCV.2017.285
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405
Abstract
Bilinear models provide an appealing framework for mixing and merging information in Visual Question Answering (VQA) tasks. They help to learn high-level associations between question meaning and visual concepts in the image, but they suffer from severe dimensionality issues. We introduce MUTAN, a multimodal tensor-based Tucker decomposition that efficiently parametrizes bilinear interactions between visual and textual representations. In addition to the Tucker framework, we design a low-rank matrix-based decomposition to explicitly constrain the interaction rank. With MUTAN, we control the complexity of the merging scheme while retaining interpretable fusion relations. We show how the Tucker decomposition framework generalizes some of the latest VQA architectures, providing state-of-the-art results.
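The fusion described in the abstract can be sketched numerically: a full bilinear interaction between a question embedding and an image embedding is a 3-way tensor, which Tucker decomposition factors into three projection matrices and a small core; MUTAN further constrains each mode-3 slice of the core to rank at most R. Below is a minimal NumPy sketch of that idea, assuming randomly initialized (untrained) factors; all dimensions, variable names, and the initialization scale are illustrative choices, not the paper's actual hyper-parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not the paper's): raw question/image/answer sizes
d_q, d_v, d_o = 64, 48, 100
# Tucker core dimensions after projecting each modality
t_q, t_v, t_o = 20, 20, 24
# Rank constraint on each mode-3 slice of the core tensor
R = 5

# Tucker factor matrices (one per mode), randomly initialized for the sketch
W_q = rng.standard_normal((d_q, t_q)) * 0.01   # question projection
W_v = rng.standard_normal((d_v, t_v)) * 0.01   # image projection
W_o = rng.standard_normal((t_o, d_o)) * 0.01   # output projection

# Low-rank core: slice k of the core is sum_r a[r, k] b[r, k]^T, so its rank <= R
A = rng.standard_normal((R, t_o, t_q)) * 0.01
B = rng.standard_normal((R, t_o, t_v)) * 0.01

def mutan_fusion(q, v):
    """Tucker-style bilinear fusion of a question vector q and image vector v."""
    q_t = q @ W_q  # project question into the core space, shape (t_q,)
    v_t = v @ W_v  # project image into the core space, shape (t_v,)
    # For each output dim k: z_k = sum_r (a_{r,k} . q_t) * (b_{r,k} . v_t)
    z = np.einsum('rki,i->rk', A, q_t) * np.einsum('rkj,j->rk', B, v_t)
    z = z.sum(axis=0)      # collapse the rank dimension, shape (t_o,)
    return z @ W_o         # answer scores, shape (d_o,)

q = rng.standard_normal(d_q)
v = rng.standard_normal(d_v)
scores = mutan_fusion(q, v)
print(scores.shape)  # (100,)
```

The low-rank structure is what keeps the parameter count tractable: the full bilinear tensor would need d_q * d_v * d_o entries, while the factorized form needs only the three projections plus 2 * R * t_o * max(t_q, t_v)-sized core factors.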
Pages: 2631-2639
Page count: 9
References: 31 in total