Reflective Instruction Tuning: Mitigating Hallucinations in Large Vision-Language Models

Cited by: 0
Authors
Zhang, Jinrui [1 ]
Wang, Teng [1 ,2 ]
Zhang, Haigang [3 ]
Lu, Ping [4 ]
Zheng, Feng [1 ,5 ]
Affiliations
[1] Southern Univ Sci & Technol, Shenzhen, Peoples R China
[2] Univ Hong Kong, Hong Kong, Peoples R China
[3] Shenzhen Polytech Univ, Shenzhen, Peoples R China
[4] ZTE Corp, Cloud Comp & IT Inst, Shenzhen, Peoples R China
[5] Peng Cheng Lab, Res Inst Multiple Agents & Embodied Intelligence, Shenzhen, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Large Vision-Language Models; Visual Instruction Tuning; Hallucination Mitigation
DOI
10.1007/978-3-031-73113-6_12
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Large vision-language models (LVLMs) have shown promising performance on a variety of vision-language tasks. However, they remain susceptible to hallucinations, generating outputs misaligned with visual content or instructions. While various mitigation strategies have been proposed, they often neglect a key contributor to hallucinations: the lack of fine-grained reasoning supervision during training. Without intermediate reasoning steps, models may establish superficial shortcuts between instructions and responses, failing to internalize the inherent reasoning logic. To address this challenge, we propose reflective instruction tuning, which integrates rationale learning into visual instruction tuning. Unlike previous methods that learn only from responses, our approach has the model predict rationales justifying why responses are correct or incorrect. This fosters deeper engagement with the fine-grained reasoning underlying each response, thus enhancing the model's reasoning proficiency. To facilitate this approach, we propose REVERIE, the first large-scale instruction-tuning dataset with ReflEctiVE RatIonalE annotations. REVERIE comprises 115k machine-generated reasoning instructions, each meticulously annotated with a corresponding pair of correct and confusing responses, alongside comprehensive rationales elucidating the justification behind the correctness or erroneousness of each response. Experimental results on multiple LVLM benchmarks reveal that reflective instruction tuning with the REVERIE dataset yields a noticeable performance gain over the baseline model, demonstrating the effectiveness of reflecting on the rationales. The project page is at https://zjr2000.github.io/projects/reverie
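The training recipe the abstract describes can be sketched as a data-expansion step: each annotated example yields the usual instruction-to-response target plus two reflective targets (a rationale for the correct response and one for the confusing response). The following is a minimal illustrative sketch only; the field names and prompt templates are assumptions, not the authors' released REVERIE format.

```python
def build_reflective_samples(example):
    """Expand one annotated example into standard + reflective training targets.

    `example` is assumed (hypothetically) to hold an instruction, a correct
    response, a confusing (incorrect) response, and a rationale for each.
    """
    samples = []
    # 1) Standard visual instruction tuning: instruction -> correct response.
    samples.append({
        "input": example["instruction"],
        "target": example["correct_response"],
    })
    # 2) Positive reflection: predict why the correct response is right.
    samples.append({
        "input": (f"{example['instruction']}\n"
                  f"Response: {example['correct_response']}\n"
                  "Explain why this response is correct."),
        "target": example["positive_rationale"],
    })
    # 3) Negative reflection: predict why the confusing response is wrong.
    samples.append({
        "input": (f"{example['instruction']}\n"
                  f"Response: {example['confusing_response']}\n"
                  "Explain why this response is incorrect."),
        "target": example["negative_rationale"],
    })
    return samples
```

Under this reading, the rationale targets are what supply the fine-grained reasoning supervision that plain response-only tuning lacks.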
Pages: 196-213 (18 pages)