Diffusion Models for Imperceptible and Transferable Adversarial Attack

Cited by: 10
Authors
Chen, Jianqi [1 ,2 ]
Chen, Hao [2 ]
Chen, Keyan [1 ,2 ]
Zhang, Yilan [1 ,2 ]
Zou, Zhengxia [3 ]
Shi, Zhenwei [1 ,2 ]
Affiliations
[1] Beihang Univ, Image Proc Ctr, Sch Astronaut, State Key Lab Virtual Real Technol & Syst, Beijing 100191, Peoples R China
[2] Shanghai Artificial Intelligence Lab, Shanghai 200232, Peoples R China
[3] Beihang Univ, Sch Astronaut, Dept Guidance Nav & Control, Beijing 100191, Peoples R China
Funding
Beijing Natural Science Foundation; National Natural Science Foundation of China;
Keywords
Diffusion models; Perturbation methods; Closed box; Noise reduction; Solid modeling; Image color analysis; Glass box; Semantics; Gaussian noise; Purification; Adversarial attack; diffusion model; imperceptible attack; transferable attack;
DOI
10.1109/TPAMI.2024.3480519
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Many existing adversarial attacks generate L_p-norm perturbations in image RGB space. Despite some achievements in transferability and attack success rate, the crafted adversarial examples are easily perceived by human eyes. Towards visual imperceptibility, some recent works explore unrestricted attacks without L_p-norm constraints, yet these lack transferability when attacking black-box models. In this work, we propose a novel imperceptible and transferable attack that leverages both the generative and discriminative power of diffusion models. Specifically, instead of manipulating pixel space directly, we craft perturbations in the latent space of diffusion models. Combined with well-designed content-preserving structures, this lets us generate human-insensitive perturbations embedded with semantic clues. For better transferability, we further "deceive" the diffusion model, which can be viewed as an implicit recognition surrogate, by distracting its attention away from the target regions. To our knowledge, our proposed method, DiffAttack, is the first to introduce diffusion models into the adversarial attack field. Extensive experiments conducted across diverse model architectures (CNNs, Transformers, and MLPs), datasets (ImageNet, CUB-200, and Stanford Cars), and defense mechanisms underscore the superiority of our attack over existing methods such as iterative attacks, GAN-based attacks, and ensemble attacks. Furthermore, we provide a comprehensive discussion on future research avenues in diffusion-based adversarial attacks, aiming to chart a course for this burgeoning field.
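
A minimal sketch, for illustration only and not the authors' released DiffAttack implementation: assuming the Stable Diffusion VAE ("stabilityai/sd-vae-ft-mse" from Hugging Face diffusers) as the latent encoder/decoder and a torchvision ResNet-50 as a stand-in surrogate classifier, it shows the core idea the abstract describes of optimizing an adversarial perturbation in diffusion latent space rather than in RGB pixel space, under a content-preservation penalty. The DDIM inversion/denoising steps and the attention-distraction loss of the actual method are omitted.

# Hedged sketch of latent-space adversarial optimization; not the paper's code.
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL
from torchvision.models import resnet50, ResNet50_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").to(device).eval()
clf = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2).to(device).eval()
for p in list(vae.parameters()) + list(clf.parameters()):
    p.requires_grad_(False)  # only the latent perturbation is optimized

# ImageNet normalization expected by the torchvision classifier.
MEAN = torch.tensor([0.485, 0.456, 0.406], device=device).view(1, 3, 1, 1)
STD = torch.tensor([0.229, 0.224, 0.225], device=device).view(1, 3, 1, 1)

def latent_space_attack(image, label, steps=50, lr=1e-2, content_weight=10.0):
    """image: (1, 3, H, W) tensor in [-1, 1]; label: (1,) true class id."""
    with torch.no_grad():
        z0 = vae.encode(image).latent_dist.mean  # latent code of the clean image
    delta = torch.zeros_like(z0, requires_grad=True)  # perturbation lives in latent space
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x_adv = vae.decode(z0 + delta).sample.clamp(-1, 1)
        logits = clf((((x_adv + 1) / 2) - MEAN) / STD)
        # Untargeted objective: push the prediction away from the true label,
        # while an image-space MSE term keeps the decoded result close to the
        # original (a crude stand-in for content preservation).
        loss = -F.cross_entropy(logits, label) + content_weight * F.mse_loss(x_adv, image)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return vae.decode(z0 + delta.detach()).sample.clamp(-1, 1)

Because the perturbation is decoded through the VAE, its pixel-space footprint tends to follow image semantics rather than the high-frequency noise typical of L_p-norm attacks, which is the intuition behind attacking in latent space.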
Pages: 961-977
Page count: 17