Black-box Backdoor Defense via Zero-shot Image Purification

Cited: 0
Authors
Shi, Yucheng [1 ]
Du, Mengnan [2 ]
Wu, Xuansheng [1 ]
Guan, Zihan [1 ]
Sun, Jin [1 ]
Liu, Ninghao [1 ]
Affiliations
[1] Univ Georgia, Sch Comp, Athens, GA 30602 USA
[2] New Jersey Inst Technol, Dept Data Sci, Newark, NJ 07102 USA
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023) | 2023
Keywords: (none listed)
DOI: not available
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract
Backdoor attacks inject poisoned samples into the training data, resulting in the misclassification of the poisoned input during a model's deployment. Defending against such attacks is challenging, especially for real-world black-box models where only query access is permitted. In this paper, we propose a novel defense framework against backdoor attacks through Zero-shot Image Purification (ZIP). Our framework can be applied to poisoned models without requiring internal information about the model or any prior knowledge of the clean/poisoned samples. Our defense framework involves two steps. First, we apply a linear transformation (e.g., blurring) on the poisoned image to destroy the backdoor pattern. Then, we use a pre-trained diffusion model to recover the missing semantic information removed by the transformation. In particular, we design a new reverse process by using the transformed image to guide the generation of high-fidelity purified images, which works in zero-shot settings. We evaluate our ZIP framework on multiple datasets with different types of attacks. Experimental results demonstrate the superiority of our ZIP framework compared to state-of-the-art backdoor defense baselines. We believe that our results will provide valuable insights for future defense methods for black-box models. Our code is available at https://github.com/sycny/ZIP.
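The two-step defense described in the abstract can be sketched as follows. This is a toy stand-in, not the authors' implementation: a box blur plays the role of the linear transformation that destroys the backdoor pattern, and a simple guided iterative update stands in for the pre-trained diffusion model's reverse process. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def linear_transform(image, k=3):
    """Step 1: apply a linear transformation (here, a box blur) to the
    input image to destroy any localized backdoor trigger pattern."""
    h, w = image.shape
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def guided_purify(image, n_steps=10, guidance=0.5, rng=None):
    """Step 2 (toy stand-in): start from noise and iteratively denoise,
    pulling each step toward the transformed image so the result keeps
    its semantics while the trigger stays destroyed. The real method
    uses a pre-trained diffusion model's reverse process for this."""
    rng = np.random.default_rng(0) if rng is None else rng
    guide = linear_transform(image)
    x = rng.standard_normal(image.shape)      # start from pure noise
    for _ in range(n_steps):
        x = x + guidance * (guide - x)        # guidance toward the blurred image
    return x
```

In the actual framework, `guided_purify` would be a zero-shot diffusion reverse process conditioned on the transformed image; the point of the sketch is only the control flow: transform first, then guided generation.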
Pages: 31