Inverse Visual Question Answering: A New Benchmark and VQA Diagnosis Tool

Cited by: 18
Authors
Liu, Feng [1 ]
Xiang, Tao [2 ]
Hospedales, Timothy M. [3 ]
Yang, Wankou [1 ]
Sun, Changyin [1 ]
Affiliations
[1] Southeast University, School of Automation, Nanjing 210096, People's Republic of China
[2] Queen Mary University of London, School of Electronic Engineering and Computer Science, Computer Vision & Multimedia, London E1 4NS, England
[3] University of Edinburgh, School of Informatics, IPAB, 10 Crichton St, Edinburgh EH8 9AB, Midlothian, Scotland
Keywords
Benchmark testing; visualization; predictive models; analytical models; image color analysis; knowledge discovery; task analysis; inverse visual question answering; VQA visualisation; visuo-linguistic understanding; reinforcement learning; networks
DOI
10.1109/TPAMI.2018.2880185
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
In recent years, visual question answering (VQA) has become topical. The premise of VQA's significance as a benchmark in AI is that both the image and the textual question need to be well understood and mutually grounded in order to infer the correct answer. However, current VQA models perhaps 'understand' less than initially hoped, and instead master the easier task of exploiting cues given away in the question and biases in the answer distribution [1]. In this paper we propose the inverse problem of VQA (iVQA). The iVQA task is to generate a question that corresponds to a given image and answer pair. We propose a variational iVQA model that can generate diverse, grammatically correct and content-correlated questions that match the given answer. Based on this model, we show that iVQA is an interesting benchmark for visuo-linguistic understanding, and a more challenging alternative to VQA, because an iVQA model needs to understand the image better to be successful. As a second contribution, we show how to use iVQA in a novel reinforcement learning framework to diagnose any existing VQA model by exposing its belief set: the set of question-answer pairs that the VQA model would predict to be true for a given image. This provides a completely new window into what VQA models 'believe' about images. We show that existing VQA models have more erroneous beliefs than previously thought, revealing their intrinsic weaknesses. Suggestions are then made on how to address these weaknesses going forward.
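To make the abstract's two contributions concrete, here is a minimal sketch of the iVQA task interface and the belief-set diagnosis loop. All names (VQAModel, IVQAGenerator, predict, sample_questions, belief_set) are hypothetical placeholders assumed for illustration only; the paper's actual model is a variational question generator trained with reinforcement learning, not the plain enumeration shown here.

```python
# Sketch of the iVQA interface and belief-set extraction described in the
# abstract. All class and method names are hypothetical placeholders,
# not the authors' published API.

from typing import List, Tuple


class VQAModel:
    """Placeholder: maps (image, question) -> predicted answer string."""
    def predict(self, image, question: str) -> str:
        raise NotImplementedError


class IVQAGenerator:
    """Placeholder: maps (image, answer) -> diverse candidate questions,
    e.g., sampled from a variational question decoder."""
    def sample_questions(self, image, answer: str, n: int = 5) -> List[str]:
        raise NotImplementedError


def belief_set(vqa: VQAModel, ivqa: IVQAGenerator, image,
               candidate_answers: List[str]) -> List[Tuple[str, str]]:
    """Collect question-answer pairs the VQA model would 'believe' true
    for this image: for each answer, generate matching questions with the
    iVQA model, then keep pairs the VQA model answers consistently."""
    beliefs = []
    for answer in candidate_answers:
        for question in ivqa.sample_questions(image, answer):
            if vqa.predict(image, question) == answer:
                beliefs.append((question, answer))
    return beliefs
```

Under this reading, erroneous entries in the returned belief set (pairs a human would judge false for the image) expose the VQA model's intrinsic weaknesses.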
Pages: 460-474
Page count: 15
References (61 in total; the first 10 are shown below)
[1] Agrawal, Aishwarya; Batra, Dhruv; Parikh, Devi; Kembhavi, Aniruddha. Don't Just Assume; Look and Answer: Overcoming Priors for Visual Question Answering. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018: 4971-4980.
[2] Agrawal, Aishwarya; Lu, Jiasen; Antol, Stanislaw; Mitchell, Margaret; Zitnick, C. Lawrence; Parikh, Devi; Batra, Dhruv. VQA: Visual Question Answering. International Journal of Computer Vision, 2017, 123(1): 4-31.
[3] Anderson, Peter; He, Xiaodong; Buehler, Chris; Teney, Damien; Johnson, Mark; Gould, Stephen; Zhang, Lei. Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018: 6077-6086.
[4] Andreas, Jacob; Rohrbach, Marcus; Darrell, Trevor; Klein, Dan. Neural Module Networks. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016: 39-48.
[5] [Anonymous]. Proceedings of the International Conference on Learning Representations (ICLR), 2018.
[6] [Anonymous]. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2016. DOI: 10.18653/v1/D16-1203.
[7] [Anonymous]. Proceedings of the International Conference on Learning Representations (ICLR), 2017.
[8] [Anonymous]. CoRR, abs/1711.01732.
[9] [Anonymous]. arXiv:1608.08974, 2016.
[10] [Anonymous]. Proceedings of the 2nd International Conference on Learning Representations (ICLR), 2014.