Causal Inference with Knowledge Distilling and Curriculum Learning for Unbiased VQA

Cited by: 28
Authors
Pan, Yonghua [1 ]
Li, Zechao [1 ]
Zhang, Liyan [2 ]
Tang, Jinhui [1 ]
Affiliations
[1] Nanjing Univ Sci & Technol, 200 Xiaolingwei St, Nanjing 210094, Jiangsu, Peoples R China
[2] Nanjing Univ Aeronaut & Astronaut, 29 Yudao St, Nanjing 210016, Jiangsu, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Visual question answering; neural networks; knowledge distillation; causal inference;
DOI
10.1145/3487042
CLC number
TP [Automation technology, computer technology];
Discipline code
0812;
Abstract
Recently, many Visual Question Answering (VQA) models have relied on the correlations between questions and answers while neglecting those between the visual and textual information. As a result, they perform poorly when the handled data are distributed differently from the training data (i.e., out-of-distribution (OOD) data). To this end, we propose a two-stage unbiased VQA approach that addresses the bias issue from a causal perspective. In the causal inference stage, we mark the spurious correlation on the causal graph, explore the counterfactual causality, and devise a causal target based on the inherent correlations between the conventional and counterfactual VQA models. In the distillation stage, we introduce the causal target into the training process and leverage knowledge distillation together with curriculum learning to obtain the unbiased model. Since Causal Inference with Knowledge Distilling and Curriculum Learning (CKCL) reinforces the contribution of the visual information and eliminates the impact of the spurious correlation by distilling the knowledge gained through causal inference into the VQA model, it performs well on both standard and out-of-distribution data. Extensive experimental results on the VQA-CP v2 dataset demonstrate the superior performance of the proposed method compared with state-of-the-art (SotA) methods.
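The abstract describes a two-stage scheme: derive a debiased "causal target" from a conventional (factual) and a counterfactual VQA model, then distill that target into the student model under a curriculum schedule. The record does not give the exact form of the causal target or the loss, so the sketch below is purely illustrative: it assumes a TIE-style subtraction of counterfactual (question-only) logits from factual logits as the causal target, a KL-divergence distillation loss, and a scalar `curriculum_weight` that ramps up during training. All function names are hypothetical.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def causal_target(factual_logits, counterfactual_logits):
    # Hypothetical causal target: subtract the counterfactual
    # (language-prior-only) logits from the factual logits, removing
    # the spurious question-answer correlation from the teacher signal.
    return [f - c for f, c in zip(factual_logits, counterfactual_logits)]

def kl_divergence(p, q):
    """KL(p || q) for two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def distillation_loss(student_logits, factual_logits, counterfactual_logits,
                      curriculum_weight):
    # Distill the debiased causal target into the student VQA model;
    # curriculum_weight is assumed to grow from 0 to 1 over training
    # (easy-to-hard schedule), gradually strengthening the causal signal.
    teacher = softmax(causal_target(factual_logits, counterfactual_logits))
    student = softmax(student_logits)
    return curriculum_weight * kl_divergence(teacher, student)
```

When the student's logits already match the causal target, the loss is zero; otherwise the KL term pulls the student toward the debiased answer distribution rather than the biased factual one.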
Pages: 23