Align Is Not Enough: Multimodal Universal Jailbreak Attack Against Multimodal Large Language Models

Cited by: 0
Authors
Wang, Youze [1 ]
Hu, Wenbo [1 ]
Dong, Yinpeng [2 ]
Liu, Jing [3 ]
Zhang, Hanwang [4 ]
Hong, Richang [1 ]
Affiliations
[1] Hefei Univ Technol, Sch Comp Sci & Informat Engn, Hefei 230009, Peoples R China
[2] Tsinghua Univ, Dept Comp Sci & Technol, Beijing 100084, Peoples R China
[3] Chinese Acad Sci, Inst Automat, Beijing 100190, Peoples R China
[4] Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore 639798, Singapore
Funding
National Research Foundation of Singapore; National Natural Science Foundation of China;
Keywords
Safety; Large language models; Electronic mail; Watermarking; Robustness; Biological system modeling; Visualization; Circuits and systems; Social networking (online); Reviews; Multimodal large language models; adversarial attack; jailbreak attack;
DOI
10.1109/TCSVT.2025.3526248
CLC Classification
TM [Electrical Engineering]; TN [Electronics & Communication Technology];
Subject Classification Codes
0808 ; 0809 ;
Abstract
Large Language Models (LLMs) have evolved into Multimodal Large Language Models (MLLMs), which substantially extend their capabilities by integrating visual and other modalities, aligning more closely with human intelligence, which processes diverse forms of data beyond text. Despite these advances, the undesirable generation of these models remains a critical concern, particularly given the vulnerabilities exposed by text-based jailbreak attacks, which pose a significant threat by circumventing existing safety protocols. Motivated by the unique security risks introduced when new modalities are combined with old ones in MLLMs, we propose a unified multimodal universal jailbreak attack framework that leverages iterative image-text interactions and a transfer-based strategy to generate a universal adversarial suffix and image. Our work shows not only that the interaction of image and text modalities constitutes a critical vulnerability, but also that multimodal universal jailbreak attacks elicit higher-quality undesirable generations across different MLLMs. We evaluate undesirable content generation on MLLMs such as LLaVA, Yi-VL, MiniGPT4, MiniGPT-v2, and InstructBLIP, revealing significant multimodal safety-alignment issues and highlighting the inadequacy of current safety mechanisms against sophisticated multimodal attacks. This study underscores the urgent need for robust safety measures in MLLMs and advocates a comprehensive review and enhancement of security protocols to mitigate the risks associated with multimodal capabilities.
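The general idea described in the abstract (jointly optimizing a universal adversarial image and a universal adversarial text suffix against several surrogate models so the attack transfers) can be illustrated with a toy sketch. Everything below is an illustrative assumption, not the paper's actual algorithm: the linear surrogate losses, the 8x8 "image", the 10-token vocabulary, and the greedy coordinate-swap loop merely stand in for real MLLM losses and GCG-style discrete optimization.

```python
import numpy as np

# Toy surrogate "models": each scores an (image, suffix) pair with a loss that is
# differentiable in the image and discrete in the suffix. Lower loss = attack
# succeeds in this toy setup. These stand in for real MLLMs (illustrative only).
def make_surrogate(seed):
    r = np.random.default_rng(seed)
    w_img = r.normal(size=64)       # weights over a flattened 8x8 "image"
    w_tok = r.normal(size=(10, 5))  # score per (token id, suffix position)
    def loss(image, suffix):
        return float(image @ w_img + sum(w_tok[t, i] for i, t in enumerate(suffix)))
    def image_grad(image, suffix):
        return w_img                # gradient w.r.t. the image (linear toy loss)
    return loss, image_grad

surrogates = [make_surrogate(s) for s in range(3)]

def attack(steps=50, lr=0.1, eps=1.0):
    image = np.zeros(64)            # universal adversarial image perturbation
    suffix = [0] * 5                # universal adversarial suffix (token ids)
    for _ in range(steps):
        # Image step: gradient descent on the loss averaged over all surrogates,
        # so the perturbation transfers rather than overfitting one model.
        g = np.mean([ig(image, suffix) for _, ig in surrogates], axis=0)
        image = np.clip(image - lr * g, -eps, eps)   # keep perturbation bounded
        # Suffix step: greedy per-position token swap on the averaged loss.
        for pos in range(len(suffix)):
            suffix[pos] = min(range(10), key=lambda t: np.mean(
                [l(image, suffix[:pos] + [t] + suffix[pos+1:]) for l, _ in surrogates]))
    return image, suffix

img, sfx = attack()
final = float(np.mean([l(img, sfx) for l, _ in surrogates]))
print(final)
```

The alternating structure mirrors the "iterative image-text interactions" phrasing: each modality is optimized in turn while the other is held fixed, and averaging the loss over multiple surrogates is one common transfer-based strategy.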
Pages: 5475-5488
Page count: 14