Zero-shot Visual Question Answering with Language Model Feedback

Citations: 0
Authors
Du, Yifan [1 ,4 ]
Li, Junyi [1 ,3 ]
Tang, Tianyi [1 ]
Zhao, Wayne Xin [1 ,4 ]
Wen, Ji-Rong [1 ,2 ,4 ]
Affiliations
[1] Renmin Univ China, Gaoling Sch Artificial Intelligence, Beijing, Peoples R China
[2] Renmin Univ China, Sch Informat, Beijing, Peoples R China
[3] Univ Montreal, DIRO, Montreal, PQ, Canada
[4] Beijing Key Lab Big Data Management & Anal Method, Beijing, Peoples R China
Source
FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023) | 2023
Funding
Beijing Natural Science Foundation; National Natural Science Foundation of China
DOI
None available
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
In this paper, we propose a novel language-model-guided captioning approach, LAMOC, for knowledge-based visual question answering (VQA). Our approach employs the captions generated by a captioning model as the context for an answer prediction model, which is a pre-trained language model (PLM). As the major contribution, we leverage the guidance and feedback of the prediction model to improve the capability of the captioning model. In this way, the captioning model can become aware of the task goal and the information needs of the PLM. To develop our approach, we design two specific training stages: the first stage adapts the captioning model to the prediction model (selecting the candidate captions most suitable for training), and the second stage tunes the captioning model according to the task goal (learning from the feedback of the PLM). Extensive experiments demonstrate the effectiveness of the proposed approach on the knowledge-based VQA task. Specifically, on the challenging A-OKVQA dataset, LAMOC outperforms several competitive zero-shot methods and even achieves comparable results to a fine-tuned VLP model. Our code is publicly available at https://github.com/RUCAIBox/LAMOC.
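
The pipeline described in the abstract is easy to picture in code. Below is a minimal sketch of the caption-then-answer loop and the PLM-feedback scoring it relies on, assuming a Hugging Face BLIP captioner and a FLAN-T5 answer predictor; the model checkpoints, the prompt format, and the plm_feedback_score helper are illustrative assumptions, not the authors' implementation (the official code is at https://github.com/RUCAIBox/LAMOC).

import torch
from PIL import Image
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    BlipForConditionalGeneration,
    BlipProcessor,
)

# Captioning model: converts the image into textual context for the PLM.
cap_processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
cap_model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

# Answer prediction model: a frozen pre-trained language model (PLM).
qa_tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")
qa_model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")

def generate_captions(image, num_captions=5):
    """Sample several candidate captions for one image."""
    inputs = cap_processor(images=image, return_tensors="pt")
    out = cap_model.generate(**inputs, do_sample=True, top_p=0.9,
                             num_return_sequences=num_captions, max_new_tokens=30)
    return [cap_processor.decode(ids, skip_special_tokens=True) for ids in out]

def answer(question, caption):
    """Zero-shot answer prediction with a caption as the only visual context."""
    prompt = f"Context: {caption}\nQuestion: {question}\nAnswer:"
    inputs = qa_tokenizer(prompt, return_tensors="pt")
    out = qa_model.generate(**inputs, max_new_tokens=10)
    return qa_tokenizer.decode(out[0], skip_special_tokens=True)

def plm_feedback_score(question, caption, answer_text):
    """Score a caption by the PLM's log-likelihood of the answer given that
    caption; a reward of this kind could drive the stage-2 tuning of the
    captioning model described in the abstract (illustrative only)."""
    prompt = f"Context: {caption}\nQuestion: {question}\nAnswer:"
    enc = qa_tokenizer(prompt, return_tensors="pt")
    labels = qa_tokenizer(answer_text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = qa_model(**enc, labels=labels).loss  # mean per-token NLL
    return -loss.item()

if __name__ == "__main__":
    image = Image.open("example.jpg")  # hypothetical image path
    question = "What sport is being played?"
    captions = generate_captions(image)
    # Keep the caption the PLM finds most informative (stage-1-style selection).
    best = max(captions, key=lambda c: plm_feedback_score(question, c, answer(question, c)))
    print(answer(question, best))

Note that in this setup only the captioning model would be updated in either training stage; the PLM stays frozen and merely supplies guidance and feedback, which is what keeps the method applicable in the zero-shot setting.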
Pages: 9268-9281
Number of pages: 14