Visual Question Reasoning on General Dependency Tree

Cited by: 20
Authors
Cao, Qingxing [1 ]
Liang, Xiaodan [1 ]
Li, Bailin [1 ]
Li, Guanbin [1 ]
Lin, Liang [1 ]
Affiliations
[1] Sun Yat Sen Univ, Sch Data & Comp Sci, Guangzhou, Guangdong, Peoples R China
Source
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) | 2018
Funding
National Natural Science Foundation of China
Keywords
DOI
10.1109/CVPR.2018.00757
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Collaborative reasoning for understanding each image-question pair is critical but under-explored for building an interpretable Visual Question Answering (VQA) system. Although very recent works have also tried explicit compositional processes to assemble the multiple sub-tasks embedded in a question, their models rely heavily on annotations or hand-crafted rules to obtain a valid reasoning layout, leading to either heavy labor or poor performance on compositional reasoning. In this paper, to enable global context reasoning for better aligning the image and language domains in diverse and unrestricted cases, we propose a novel reasoning network called the Adversarial Composition Modular Network (ACMN). This network comprises two collaborative modules: i) an adversarial attention module that exploits local visual evidence for each word parsed from the question; and ii) a residual composition module that composes the previously mined evidence. Given a dependency parse tree for each question, the adversarial attention module progressively discovers the salient regions of one word by densely combining the regions of its child word nodes in an adversarial manner. The residual composition module then merges the hidden representations of an arbitrary number of children through sum pooling and a residual connection. Our ACMN is thus capable of building an interpretable VQA system that gradually dives into the image cues following a question-driven reasoning route and performs global reasoning by incorporating the learned knowledge of all attention modules in a principled manner. Experiments on relational datasets demonstrate the superiority of our ACMN, and visualization results show the explainability of our reasoning system.
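To make the composition step concrete, the following is a minimal illustrative sketch (not the authors' released code) of how a residual composition module could merge an arbitrary number of child-node representations along a dependency parse tree via sum pooling plus a residual connection, as described in the abstract. All class, function, and variable names here are assumptions made for illustration only.

```python
# Illustrative sketch only: residual composition over a dependency tree,
# merging an arbitrary number of child states by sum pooling with a
# residual connection. Names and shapes are assumptions, not the paper's code.
import torch
import torch.nn as nn


class ResidualComposition(nn.Module):
    """Merge a word node's own feature with sum-pooled child features."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.transform = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
        )

    def forward(self, node_feat: torch.Tensor, child_feats: list) -> torch.Tensor:
        # Sum pooling handles any number of children (including none).
        if child_feats:
            pooled = torch.stack(child_feats, dim=0).sum(dim=0)
        else:
            pooled = torch.zeros_like(node_feat)
        merged = self.transform(torch.cat([node_feat, pooled], dim=-1))
        # Residual connection preserves evidence mined at lower tree levels.
        return node_feat + merged


def compose_tree(node: dict, word_feats: dict, module: ResidualComposition) -> torch.Tensor:
    """Recursively compose hidden states bottom-up over a dependency tree.

    `node` is a dict like {"idx": int, "children": [...]}; `word_feats`
    maps a word index to its (hypothetical) visual-evidence feature vector.
    """
    child_states = [compose_tree(c, word_feats, module) for c in node["children"]]
    return module(word_feats[node["idx"]], child_states)


if __name__ == "__main__":
    dim = 8
    module = ResidualComposition(dim)
    feats = {i: torch.randn(dim) for i in range(3)}
    # Tiny example tree: root word 0 with children 1 and 2.
    tree = {"idx": 0, "children": [{"idx": 1, "children": []},
                                   {"idx": 2, "children": []}]}
    root_state = compose_tree(tree, feats, module)
    print(root_state.shape)  # torch.Size([8])
```

In the full model, each `word_feats[i]` would come from a per-word attention module over image regions; the sketch only shows how such per-node evidence could be merged up the tree.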
Pages: 7249 - 7257
Number of pages: 9