Correlation Information Bottleneck: Towards Adapting Pretrained Multimodal Models for Robust Visual Question Answering

Cited by: 2
Authors
Jiang, Jingjing [1 ]
Liu, Ziyi [1 ]
Zheng, Nanning [1 ]
Affiliations
[1] Xi An Jiao Tong Univ, Inst Artificial Intelligence & Robot, Xian 710049, Shaanxi, Peoples R China
Funding
US National Science Foundation;
Keywords
Information bottleneck; Robustness; Visual question answering; Vision-language model; LANGUAGE;
DOI
10.1007/s11263-023-01858-y
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Benefiting from large-scale pretrained vision-language models (VLMs), the performance of visual question answering (VQA) has approached human oracles. However, finetuning such models on limited data often suffers from overfitting and poor generalization, leading to a lack of model robustness. In this paper, we aim to improve input robustness from an information bottleneck perspective when adapting pretrained VLMs to the downstream VQA task. Input robustness refers to the ability of models to defend against visual and linguistic input variations, as well as shortcut learning involved in inputs. Generally, the representations obtained by pretrained VLMs inevitably contain information that is irrelevant and redundant for a specific downstream task, resulting in statistically spurious correlations and insensitivity to input variations. To encourage representations to converge to a minimal sufficient statistic in multimodal learning, we propose the Correlation Information Bottleneck (CIB), which seeks a tradeoff between compression and redundancy in representations by minimizing the mutual information (MI) between inputs and representations while maximizing the MI between outputs and representations. Moreover, we derive a tight theoretical upper bound for the mutual information between multimodal inputs and representations, incorporating different internal correlations that guide models to learn more robust representations and facilitate modality alignment. Extensive experiments consistently demonstrate the effectiveness and superiority of the proposed CIB in terms of input robustness and accuracy.
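The compression-prediction tradeoff described in the abstract follows the standard information bottleneck formulation, sketched below. This is the generic IB objective, not the paper's exact CIB loss (the record does not give it); CIB is stated to replace the input-representation term with a tighter, correlation-based upper bound. Here X denotes the multimodal input, Z the learned representation, Y the output (answer), and β a tradeoff coefficient.

```latex
% Generic information bottleneck objective (a sketch; the exact CIB
% formulation with its correlation-based MI upper bound is not given
% in this record). Minimizing I(X;Z) compresses away input-specific
% redundancy; maximizing I(Z;Y) preserves task-relevant information.
\min_{p(z \mid x)} \; \mathcal{L}_{\mathrm{IB}}
    = I(X; Z) - \beta \, I(Z; Y)
```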
Pages: 185-207
Page count: 23
Related Papers
50 items total
  • [31] A Corpus for Visual Question Answering Annotated with Frame Semantic Information
    Alizadeh, Mehrdad
    Di Eugenio, Barbara
    PROCEEDINGS OF THE 12TH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION (LREC 2020), 2020, : 5524 - 5531
  • [32] ViCLEVR: a visual reasoning dataset and hybrid multimodal fusion model for visual question answering in Vietnamese
    Tran, Khiem Vinh
    Phan, Hao Phu
    Van Nguyen, Kiet
    Nguyen, Ngan Luu Thuy
    MULTIMEDIA SYSTEMS, 2024, 30 (04)
  • [33] Visual Question Answering on CLEVR Dataset via Multimodal Fusion and Relational Reasoning
    Allahyari, Abbas
    Borna, Keivan
    2021 52ND ANNUAL IRANIAN MATHEMATICS CONFERENCE (AIMC), 2021, : 74 - 76
  • [34] Be flexible! learn to debias by sampling and prompting for robust visual question answering
    Liu, Jin
    Fan, ChongFeng
    Zhou, Fengyu
    Xu, Huijuan
    INFORMATION PROCESSING & MANAGEMENT, 2023, 60 (03)
  • [35] An Adaptive Multimodal Fusion Network Based on Multilinear Gradients for Visual Question Answering
    Zhao, Chengfang
    Tang, Mingwei
    Zheng, Yanxi
    Ran, Chaocong
    ELECTRONICS, 2025, 14 (01)
  • [36] Robust visual question answering via semantic cross modal augmentation
    Mashrur, Akib
    Luo, Wei
    Zaidi, Nayyar A.
    Robles-Kelly, Antonio
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2024, 238
  • [37] Robust Visual Question Answering Based on Counterfactual Samples and Relationship Perception
    Qin, Hong
    An, Gaoyun
    Ruan, Qiuqi
    IMAGE AND GRAPHICS TECHNOLOGIES AND APPLICATIONS, IGTA 2021, 2021, 1480 : 145 - 158
  • [38] HCCL: Hierarchical Counterfactual Contrastive Learning for Robust Visual Question Answering
    Hao, Dongze
    Wang, Qunbo
    Zhu, Xinxin
    Liu, Jing
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2024, 20 (10)
  • [39] Multimodal grid features and cell pointers for scene text visual question answering
    Gomez, Lluis
    Biten, Ali Furkan
    Tito, Ruben
    Mafla, Andres
    Rusinol, Marcal
    Valveny, Ernest
    Karatzas, Dimosthenis
    PATTERN RECOGNITION LETTERS, 2021, 150 : 242 - 249
  • [40] Design as Desired: Utilizing Visual Question Answering for Multimodal Pre-training
    Su, Tongkun
    Li, Jun
    Zhang, Xi
    Jin, Haibo
    Chen, Hao
    Wang, Qiong
    Lv, Faqin
    Zhao, Baoliang
    Hu, Ying
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2024, PT IV, 2024, 15004 : 602 - 612