The First to Know: How Token Distributions Reveal Hidden Knowledge in Large Vision-Language Models?

Cited by: 0
Authors
Zhao, Qinyu [1 ]
Xu, Ming [1 ]
Gupta, Kartik [2 ]
Asthana, Akshay [2 ]
Zheng, Liang [1 ]
Gould, Stephen [1 ]
Affiliations
[1] Australian Natl Univ, Canberra, ACT, Australia
[2] Seeing Machines Ltd, Fyshwick, Australia
Source
COMPUTER VISION - ECCV 2024, PT XLVIII | 2025 / Vol. 15106
Funding
Australian Research Council;
Keywords
Large Vision-Language Models; Logit Distribution; First Token; Hidden Knowledge; Linear Probing;
DOI
10.1007/978-3-031-73195-2_8
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Large vision-language models (LVLMs), designed to interpret and respond to human instructions, occasionally generate hallucinated or harmful content when given inappropriate instructions. This study uses linear probing to shed light on the hidden knowledge at the output layers of LVLMs. We demonstrate that the logit distribution of the first generated token contains sufficient information to determine whether to respond to an instruction, including recognizing unanswerable visual questions, defending against jailbreaking attacks, and identifying deceptive questions. This hidden knowledge is gradually lost in the logits of subsequent tokens during response generation. We then illustrate a simple decoding strategy applied at the generation of the first token, which effectively improves the generated content. Our experiments yield several interesting insights. First, the CLIP model alone already contains a strong signal for solving these tasks, which indicates potential bias in the existing datasets. Second, utilizing the first logit distributions improves performance on three additional tasks: indicating uncertainty in math solving, mitigating hallucination, and image classification. Last, with the same training data, simply finetuning LVLMs improves model performance but remains inferior to linear probing on these tasks. Our code is available at https://github.com/Qinyu-Allen-Zhao/LVLM-LP.
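To make the probing idea concrete, the following is a minimal sketch (not the authors' released code) of training a linear probe on first-token logit distributions with scikit-learn. The helper get_first_token_logits and the randomly generated feature arrays are hypothetical placeholders standing in for real LVLM logits and task labels (e.g., answerable vs. unanswerable).

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def get_first_token_logits(model, image, prompt):
    """Hypothetical helper: run the LVLM on (image, prompt) and return the
    vocabulary-sized logit vector of the first generated token."""
    raise NotImplementedError  # depends on the specific LVLM's API

# Placeholder data: (n_samples, vocab_size) first-token logits with binary
# labels. In practice these would come from get_first_token_logits.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 32000))
y_train = rng.integers(0, 2, size=200)
X_test = rng.normal(size=(50, 32000))
y_test = rng.integers(0, 2, size=50)

# The linear probe: a logistic-regression classifier on raw logits.
probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)
print("probe accuracy:", accuracy_score(y_test, probe.predict(X_test)))

Keeping the probe linear is the point of the technique: any separability the classifier finds must already be encoded in the first-token logits themselves, rather than learned by the probe.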
Pages: 127-142
Page count: 16