Human Attention in Visual Question Answering: Do Humans and Deep Networks Look at the Same Regions?

Cited by: 195
Authors
Das, Abhishek [1]
Agrawal, Harsh [2]
Zitnick, Larry [3]
Parikh, Devi [1,3]
Batra, Dhruv [1,3]
Affiliations
[1] Georgia Institute of Technology, Atlanta, GA 30332 USA
[2] Virginia Tech, Blacksburg, VA 24061 USA
[3] Facebook AI Research, Menlo Park, CA USA
Funding
U.S. National Science Foundation;
Keywords
Visual Question Answering; Attention;
DOI
10.1016/j.cviu.2017.10.001
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We conduct large-scale studies on 'human attention' in Visual Question Answering (VQA) to understand where humans choose to look to answer questions about images. We design and test multiple game-inspired novel attention-annotation interfaces that require the subject to sharpen regions of a blurred image to answer a question. Thus, we introduce the VQA-HAT (Human ATtention) dataset. We evaluate attention maps generated by state-of-the-art VQA models against human attention both qualitatively (via visualizations) and quantitatively (via rank-order correlation). Our experiments show that current attention models in VQA do not seem to be looking at the same regions as humans. Finally, we train VQA models with explicit attention supervision, and find that it improves VQA performance.
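
As a concrete illustration of the quantitative comparison described in the abstract, the short Python sketch below computes the Spearman rank-order correlation between a human attention map and a machine-generated attention map. This is a minimal sketch, not the authors' released evaluation code; the function name rank_correlation, the nearest-neighbour upsampling step, and the variable names are illustrative assumptions.

import numpy as np
from scipy.stats import spearmanr

def rank_correlation(human_map, model_map):
    """Spearman rank-order correlation between two 2-D attention maps."""
    h, w = human_map.shape
    # Nearest-neighbour upsample of the (typically coarser) model map to the
    # human map's resolution, so every spatial location has a value in both maps.
    rows = np.arange(h) * model_map.shape[0] // h
    cols = np.arange(w) * model_map.shape[1] // w
    model_resized = model_map[np.ix_(rows, cols)]
    # Rank all spatial locations in each map and correlate the rankings.
    rho, _ = spearmanr(human_map.ravel(), model_resized.ravel())
    return rho

# Usage (hypothetical data): average the correlation over question-image pairs.
# mean_rho = np.mean([rank_correlation(h, m) for h, m in zip(human_maps, model_maps)])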
Pages: 90-100
Number of pages: 11