ViStruct: Visual Structural Knowledge Extraction via Curriculum Guided Code-Vision Representation

Cited by: 0
Authors
Chen, Yangyi [1]
Wang, Xingyao [1]
Li, Manling [2]
Hoiem, Derek [1]
Ji, Heng [1]
Affiliations
[1] Univ Illinois, Urbana, IL 61801 USA
[2] Northwestern Univ, Evanston, IL USA
Source
2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2023) | 2023
Keywords
LANGUAGE;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
State-of-the-art vision-language models (VLMs) still have limited performance in structural knowledge extraction, such as relations between objects. In this work, we present ViStruct, a training framework to learn VLMs for effective visual structural knowledge extraction. Two novel designs are incorporated. First, we propose to leverage the inherent structure of programming language to depict visual structural information. This approach enables explicit and consistent representation of visual structural information at multiple granularities, such as concepts, relations, and events, in a well-organized structured format. Second, we introduce curriculum-based learning for VLMs to progressively comprehend visual structures, from fundamental visual concepts to intricate event structures. Our intuition is that lower-level knowledge may contribute to complex visual structure understanding. Furthermore, we compile and release a collection of datasets tailored for visual structural knowledge extraction. We adopt a weakly-supervised approach to directly generate visual event structures from captions for ViStruct training, capitalizing on abundant image-caption pairs from the web. In experiments, we evaluate ViStruct on visual structure prediction tasks, demonstrating its effectiveness in improving the understanding of visual structures. The code is publicly available at https://github.com/Yangyi-Chen/vi-struct.
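To make the abstract's two designs concrete, the sketch below illustrates, in Python, how visual structural knowledge at multiple granularities (concepts, relations, events) could be expressed as code, along with a curriculum ordering from lower- to higher-level structures. The class names, fields, role labels, and example scene are illustrative assumptions for this sketch, not the paper's released schema.

```python
# A minimal, hypothetical sketch of a code-vision representation: visual
# structural knowledge is expressed with programming-language constructs.
# Class names, fields, and the example scene are assumptions, not ViStruct's
# exact format.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Concept:
    """An object-level visual concept, e.g. a detected entity."""
    name: str        # e.g. "person", "skateboard"
    concept_id: int  # index of this instance within the image


@dataclass
class Relation:
    """A relation between two concepts, e.g. (person, riding, skateboard)."""
    subject: Concept
    predicate: str
    obj: Concept


@dataclass
class Event:
    """A higher-level event structure with a trigger and argument roles."""
    trigger: str
    arguments: List[Tuple[str, Concept]] = field(default_factory=list)


# Example: encoding one image's structure at increasing granularity,
# mirroring a curriculum from concepts -> relations -> events.
person = Concept(name="person", concept_id=0)
board = Concept(name="skateboard", concept_id=1)

relation = Relation(subject=person, predicate="riding", obj=board)

event = Event(
    trigger="riding",
    arguments=[("Agent", person), ("Vehicle", board)],
)

# A curriculum schedule would train the VLM on these levels in order,
# so lower-level knowledge supports the more complex event structures.
curriculum = ["concept", "relation", "event"]
```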
Pages: 13342-13357
Number of pages: 16
相关论文
共 92 条
  • [31] Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations
    Krishna, Ranjay
    Zhu, Yuke
    Groth, Oliver
    Johnson, Justin
    Hata, Kenji
    Kravitz, Joshua
    Chen, Stephanie
    Kalantidis, Yannis
    Li, Li-Jia
    Shamma, David A.
    Bernstein, Michael S.
    Li Fei-Fei
    [J]. INTERNATIONAL JOURNAL OF COMPUTER VISION, 2017, 123 (01) : 32 - 73
  • [32] Krishna Ranjay, 2016, arXiv
  • [33] Kumar M., 2010, P ADV NEUR INF PROC, V23, P1189
  • [34] Li G, 2020, AAAI CONF ARTIF INTE, V34, P11336
  • [35] Li JH, 2021, ADV NEUR IN, V34
  • [36] Li JN, 2022, PR MACH LEARN RES
  • [37] Li Junnan, 2023, arXiv
  • [38] Li L, 2019, PR MACH LEARN RES, V115, P367
  • [39] Relation-Aware Graph Attention Network for Visual Question Answering
    Li, Linjie
    Gan, Zhe
    Cheng, Yu
    Liu, Jingjing
    [J]. 2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 10312 - 10321
  • [40] Li LH, 2021, 2021 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES (NAACL-HLT 2021), P5339