Black-box Backdoor Defense via Zero-shot Image Purification

Cited by: 0
Authors
Shi, Yucheng [1 ]
Du, Mengnan [2 ]
Wu, Xuansheng [1 ]
Guan, Zihan [1 ]
Sun, Jin [1 ]
Liu, Ninghao [1 ]
Affiliations
[1] Univ Georgia, Sch Comp, Athens, GA 30602 USA
[2] New Jersey Inst Technol, Dept Data Sci, Newark, NJ 07102 USA
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023) | 2023
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Backdoor attacks inject poisoned samples into the training data, resulting in the misclassification of the poisoned input during a model's deployment. Defending against such attacks is challenging, especially for real-world black-box models where only query access is permitted. In this paper, we propose a novel defense framework against backdoor attacks through Zero-shot Image Purification (ZIP). Our framework can be applied to poisoned models without requiring internal information about the model or any prior knowledge of the clean/poisoned samples. Our defense framework involves two steps. First, we apply a linear transformation (e.g., blurring) on the poisoned image to destroy the backdoor pattern. Then, we use a pre-trained diffusion model to recover the missing semantic information removed by the transformation. In particular, we design a new reverse process by using the transformed image to guide the generation of high-fidelity purified images, which works in zero-shot settings. We evaluate our ZIP framework on multiple datasets with different types of attacks. Experimental results demonstrate the superiority of our ZIP framework compared to state-of-the-art backdoor defense baselines. We believe that our results will provide valuable insights for future defense methods for black-box models. Our code is available at https://github.com/sycny/ZIP.
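The two-step purification described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the mean-blur kernel and the toy image are assumptions, and the diffusion-model reverse process that constitutes the paper's core contribution is only stubbed out in a comment, since it requires a pre-trained model.

```python
import numpy as np

def blur(image: np.ndarray, kernel_size: int = 3) -> np.ndarray:
    """Step 1: a linear transformation (here, a mean blur) intended to
    destroy high-frequency backdoor trigger patterns. Illustrative only;
    the paper also allows other linear transformations."""
    k = kernel_size
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            # Average the k x k neighborhood around pixel (i, j).
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

# Step 2 (not runnable here): a pre-trained diffusion model's reverse
# process, guided by the blurred image, would restore the semantic
# content removed by the blur, yielding the purified image.

img = np.arange(64, dtype=float).reshape(8, 8)  # toy 8x8 "image"
blurred = blur(img, kernel_size=3)
print(blurred.shape)  # (8, 8)
```

Blurring lowers the image's high-frequency content (here, its pixel variance shrinks), which is why a trigger pattern is damaged while coarse semantics survive for the diffusion model to work from.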
Pages: 31