Reasoning on the Relation: Enhancing Visual Representation for Visual Question Answering and Cross-Modal Retrieval

Cited by: 71
Authors
Yu, Jing [1 ,2 ]
Zhang, Weifeng [3 ]
Lu, Yuhang [4 ]
Qin, Zengchang [5 ]
Hu, Yue [1 ,2 ]
Tan, Jianlong [1 ,2 ]
Wu, Qi [6 ]
Affiliations
[1] Chinese Acad Sci, Inst Informat Engn, Beijing 100093, Peoples R China
[2] Univ Chinese Acad Sci, Sch Cyber Secur, Beijing 100093, Peoples R China
[3] Jiaxing Univ, Coll Math Phys & Informat Engn, Jiaxing 314000, Peoples R China
[4] Alibaba Grp, Hangzhou 310052, Peoples R China
[5] Beihang Univ, Sch ASEE, Intelligent Comp & Machine Learning Lab, Beijing 100191, Peoples R China
[6] Univ Adelaide, Australian Ctr Robot Vis, Adelaide, SA 5005, Australia
Keywords
Visualization; Cognition; Task analysis; Knowledge discovery; Semantics; Correlation; Information retrieval; Visual relational reasoning; visual attention; visual question answering; cross-modal information retrieval
DOI
10.1109/TMM.2020.2972830
Chinese Library Classification (CLC)
TP [Automation technology, computer technology]
Subject Classification Code
0812
Abstract
Cross-modal analysis has become a promising direction for artificial intelligence. Visual representation is crucial for cross-modal analysis tasks that require visual content understanding. Visual features that carry semantic information can disentangle the underlying correlation between different modalities and thus benefit downstream tasks. In this paper, we propose a Visual Reasoning and Attention Network (VRANet) as a plug-and-play module that captures rich visual semantics and enhances the visual representation for cross-modal analysis. VRANet is built on a bilinear visual attention module that identifies the critical objects. We further propose a novel Visual Relational Reasoning (VRR) module to reason about pair-wise and inner-group visual relationships among objects, guided by the textual information. Together, the two modules enhance the visual features at both the relation level and the object level. We demonstrate the effectiveness of VRANet by applying it to both Visual Question Answering (VQA) and Cross-Modal Information Retrieval (CMIR). Extensive experiments on the VQA 2.0, CLEVR, CMPlaces, and MS-COCO datasets show superior performance compared with state-of-the-art methods.
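The sketch below is a minimal illustration, not the authors' implementation, of how such a text-guided visual enhancement module could be organized: a low-rank bilinear attention branch that weights critical object features, and a pair-wise relational reasoning branch conditioned on the text embedding, with the two outputs concatenated as the enhanced visual representation. All class names, dimensions, and fusion/pooling choices (VRANetSketch, BilinearAttention, RelationalReasoning, h_dim, mean pooling over object pairs) are illustrative assumptions.

# Hypothetical sketch of a VRANet-style module in PyTorch; names, dimensions,
# and the fusion scheme are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BilinearAttention(nn.Module):
    """Scores each object feature against the text query with a low-rank bilinear form."""
    def __init__(self, v_dim, q_dim, h_dim):
        super().__init__()
        self.v_proj = nn.Linear(v_dim, h_dim)
        self.q_proj = nn.Linear(q_dim, h_dim)
        self.score = nn.Linear(h_dim, 1)

    def forward(self, v, q):
        # v: (B, K, v_dim) object features; q: (B, q_dim) text embedding
        joint = torch.tanh(self.v_proj(v) * self.q_proj(q).unsqueeze(1))  # (B, K, h_dim)
        attn = F.softmax(self.score(joint), dim=1)                        # (B, K, 1)
        return (attn * v).sum(dim=1)                                      # (B, v_dim) attended object feature

class RelationalReasoning(nn.Module):
    """Builds pair-wise object relations conditioned on the text and pools them."""
    def __init__(self, v_dim, q_dim, h_dim):
        super().__init__()
        self.pair_mlp = nn.Sequential(
            nn.Linear(2 * v_dim + q_dim, h_dim), nn.ReLU(),
            nn.Linear(h_dim, h_dim), nn.ReLU(),
        )

    def forward(self, v, q):
        B, K, _ = v.shape
        vi = v.unsqueeze(2).expand(B, K, K, -1)               # object i
        vj = v.unsqueeze(1).expand(B, K, K, -1)               # object j
        qk = q.unsqueeze(1).unsqueeze(1).expand(B, K, K, -1)  # broadcast text to every pair
        pairs = torch.cat([vi, vj, qk], dim=-1)               # (B, K, K, 2*v_dim + q_dim)
        return self.pair_mlp(pairs).mean(dim=(1, 2))          # (B, h_dim) pooled relation feature

class VRANetSketch(nn.Module):
    """Concatenates the object-level attention and relation-level reasoning outputs."""
    def __init__(self, v_dim=2048, q_dim=512, h_dim=512):
        super().__init__()
        self.attention = BilinearAttention(v_dim, q_dim, h_dim)
        self.reasoning = RelationalReasoning(v_dim, q_dim, h_dim)

    def forward(self, v, q):
        return torch.cat([self.attention(v, q), self.reasoning(v, q)], dim=-1)

if __name__ == "__main__":
    v = torch.randn(2, 36, 2048)       # e.g., 36 region features per image from an object detector
    q = torch.randn(2, 512)            # question / caption embedding
    print(VRANetSketch()(v, q).shape)  # torch.Size([2, 2560])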
Pages: 3196-3209
Page count: 14