Learning Dual Encoding Model for Adaptive Visual Understanding in Visual Dialogue

Cited by: 22
Authors
Yu, Jing [1 ,2 ]
Jiang, Xiaoze [3 ]
Qin, Zengchang [3 ]
Zhang, Weifeng [4 ]
Hu, Yue [1 ,2 ]
Wu, Qi [5 ]
Affiliations
[1] Chinese Acad Sci, Inst Informat Engn, Beijing 100093, Peoples R China
[2] Univ Chinese Acad Sci, Sch Cyber Secur, Beijing 100049, Peoples R China
[3] Beihang Univ, Sch ASEE, Intelligent Comp & Machine Learning Lab, Beijing 100191, Peoples R China
[4] Jiaxing Univ, Coll Math Phys & Informat Engn, Jiaxing 314001, Peoples R China
[5] Univ Adelaide, Australian Ctr Robot Vis, Adelaide, SA 5005, Australia
Funding
National Natural Science Foundation of China;
Keywords
Visualization; Semantics; History; Task analysis; Cognition; Feature extraction; Adaptation models; Dual encoding; visual module; semantic module; visual relationship; dense caption; visual dialogue;
DOI
10.1109/TIP.2020.3034494
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Unlike the Visual Question Answering task, which requires answering only a single question about an image, the Visual Dialogue task involves multiple rounds of dialogue covering a broad range of visual content that may relate to any object, relationship, or high-level semantics. One of the key challenges in Visual Dialogue is therefore to learn a more comprehensive, semantically rich image representation that can adaptively attend to the visual content referred to by different questions. In this paper, we first propose a novel scheme to depict an image from both a visual and a semantic view. Specifically, the visual view captures appearance-level information in an image, including objects and their visual relationships, while the semantic view enables the agent to understand high-level visual semantics ranging from the whole image to local regions. Furthermore, on top of such dual-view image representations, we propose a Dual Encoding Visual Dialogue (DualVD) module, which adaptively selects question-relevant information from the visual and semantic views in a hierarchical manner. To demonstrate the effectiveness of DualVD, we propose two novel visual dialogue models by applying it to the Late Fusion framework and the Memory Network framework. The proposed models achieve state-of-the-art results on three benchmark datasets. A critical advantage of the DualVD module lies in its interpretability: by explicitly visualizing the gate values, we can analyze which modality (visual or semantic) contributes more to answering the current question. This offers insight into the information selection mode of the Visual Dialogue task. The code is available at https://github.com/JXZe/Learning_DualVD.
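To make the adaptive, question-conditioned selection concrete, below is a minimal PyTorch sketch of a gate that fuses a visual and a semantic view of an image. This is an illustration under stated assumptions, not the authors' implementation (which is available at the repository linked above): the class name GatedDualFusion, all dimensions, and the single-gate design are hypothetical.

    import torch
    import torch.nn as nn


    class GatedDualFusion(nn.Module):
        """Question-conditioned gated fusion over a visual and a semantic view.

        Minimal sketch only: names, dimensions, and the single-gate design
        are assumptions for illustration, not the actual DualVD code.
        """

        def __init__(self, feat_dim: int, q_dim: int):
            super().__init__()
            # The gate is computed from the question and both view embeddings.
            self.gate = nn.Sequential(
                nn.Linear(q_dim + 2 * feat_dim, feat_dim),
                nn.Sigmoid(),
            )

        def forward(self, visual, semantic, question):
            # visual, semantic: (batch, feat_dim); question: (batch, q_dim)
            g = self.gate(torch.cat([question, visual, semantic], dim=-1))
            # g near 1 favors the visual view; g near 0 favors the semantic view.
            fused = g * visual + (1.0 - g) * semantic
            # Returning g allows visualizing which view each answer relied on,
            # mirroring the interpretability claim in the abstract.
            return fused, g


    # Toy usage: batch of 2, 512-d view features, 256-d question embedding.
    fusion = GatedDualFusion(feat_dim=512, q_dim=256)
    v, s, q = torch.randn(2, 512), torch.randn(2, 512), torch.randn(2, 256)
    fused, gate = fusion(v, s, q)
    print(fused.shape, gate.mean().item())  # torch.Size([2, 512]), gate near 0.5

Inspecting the returned gate values per question is what enables the modality-contribution analysis described in the abstract.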
Pages: 220-233
Page count: 14