Visual Dialog

Cited by: 453
Authors
Das, Abhishek [1 ]
Kottur, Satwik [2 ]
Gupta, Khushi [2 ]
Singh, Avi [3 ]
Yadav, Deshraj [4 ]
Moura, Jose M. F. [2 ]
Parikh, Devi [1 ]
Batra, Dhruv [1 ]
Affiliations
[1] Georgia Inst Technol, Atlanta, GA 30332 USA
[2] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
[3] Univ Calif Berkeley, Berkeley, CA USA
[4] Virginia Tech, Blacksburg, VA USA
Source
30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017) | 2017
Funding
U.S. National Science Foundation;
Keywords
DOI
10.1109/CVPR.2017.121
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in the image, infer context from the history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and to benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial contains one dialog (10 question-answer pairs) on each of ~140k images from the COCO dataset, for a total of ~1.4M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders (Late Fusion, Hierarchical Recurrent Encoder, and Memory Network) and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog in which the AI agent is asked to sort a set of candidate answers and is evaluated on metrics such as the mean reciprocal rank of the human response. We quantify the gap between machine and human performance on the Visual Dialog task via human studies. Our dataset, code, and trained models will be released publicly at visualdialog.org. Putting it all together, we demonstrate the first 'visual chatbot'!
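As a concrete illustration of the retrieval-based evaluation described in the abstract, the sketch below scores a model by the rank it assigns to the human (ground-truth) answer within its sorted candidate list, reporting mean reciprocal rank, recall@k, and mean rank. The function name, the plain-Python data structures, and the toy candidate lists are illustrative assumptions, not the authors' released evaluation code.

```python
from typing import List, Sequence


def evaluate_ranks(sorted_candidate_lists: Sequence[List[str]],
                   human_answers: Sequence[str],
                   k: int = 5) -> dict:
    """Retrieval-style Visual Dialog evaluation (sketch).

    For each question, the model sorts its candidate answers (best first).
    We locate the human answer in that sorted list and aggregate
    mean reciprocal rank, recall@k, and mean rank over all questions.
    """
    reciprocal_ranks, recalls, ranks = [], [], []
    for candidates, human_answer in zip(sorted_candidate_lists, human_answers):
        # Rank is the 1-based position of the human answer in the model's sorted list.
        rank = candidates.index(human_answer) + 1
        reciprocal_ranks.append(1.0 / rank)
        recalls.append(1.0 if rank <= k else 0.0)
        ranks.append(rank)
    n = len(ranks)
    return {
        "mrr": sum(reciprocal_ranks) / n,
        f"recall@{k}": sum(recalls) / n,
        "mean_rank": sum(ranks) / n,
    }


# Example: two questions, each with candidates already sorted by the model.
sorted_candidates = [
    ["a black cat", "two dogs", "yes", "no"],     # human answer ranked 1st
    ["no", "maybe", "a red umbrella", "three"],   # human answer ranked 3rd
]
human = ["a black cat", "a red umbrella"]
print(evaluate_ranks(sorted_candidates, human, k=2))
# -> {'mrr': 0.666..., 'recall@2': 0.5, 'mean_rank': 2.0}
```

In the paper's protocol each question comes with a fixed pool of candidate answers (including the human response), so ranking-based metrics allow objective, per-response evaluation without penalizing valid paraphrases as harshly as exact-match scoring would.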
Pages: 1080 - 1089
Page count: 10