Question action relevance and editing for visual question answering

Cited by: 11
Authors
Toor, Andeep S. [1 ]
Wechsler, Harry [1 ]
Nappi, Michele [2 ]
Affiliations
[1] George Mason Univ, Dept Comp Sci, Fairfax, VA 22030 USA
[2] Univ Salerno, Dipartimento Informat, Fisciano, Italy
Keywords
Computer vision; Visual question answering; Deep learning; Action recognition; Image understanding; Question relevance
DOI
10.1007/s11042-018-6097-z
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
Visual Question Answering (VQA) expands on the Turing Test, as it involves the ability to answer questions about visual content. Current efforts in VQA, however, still do not fully consider whether a question about visual content is relevant and, if it is not, how best to edit it to make it answerable. Question relevance has so far been considered only at the level of the whole question, using binary classification and without any capability to edit a question to make it grounded and intelligible. The only exception is our prior work on question part relevance, which allows relevance determination and editing based on object nouns. This paper extends that work on object relevance to determine the relevance of a question's action and leverages this capability to edit an irrelevant question into a relevant one. Practical applications of such a capability include answering biometric-related queries across a set of images, including people and their actions (behavioral biometrics). The feasibility of our approach is shown using Context-Collaborative VQA (C2VQA) Action/Relevance/Edit (ARE). Our results show that the proposed approach outperforms all other models on the novel tasks of question action relevance (QAR) and question action editing (QAE) by a significant margin. The ultimate goal for future research is to address full-fledged W5+ inquiries (What, Where, When, Why, Who, and How) that are grounded to and reference video, using both nouns and verbs in a collaborative, context-aware fashion.
Pages: 2921-2935
Page count: 15