Black-box Backdoor Defense via Zero-shot Image Purification

Cited: 0
Authors
Shi, Yucheng [1 ]
Du, Mengnan [2 ]
Wu, Xuansheng [1 ]
Guan, Zihan [1 ]
Sun, Jin [1 ]
Liu, Ninghao [1 ]
Affiliations
[1] Univ Georgia, Sch Comp, Athens, GA 30602 USA
[2] New Jersey Inst Technol, Dept Data Sci, Newark, NJ 07102 USA
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023) | 2023
Keywords
DOI
None available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Backdoor attacks inject poisoned samples into the training data, resulting in the misclassification of the poisoned input during a model's deployment. Defending against such attacks is challenging, especially for real-world black-box models where only query access is permitted. In this paper, we propose a novel defense framework against backdoor attacks through Zero-shot Image Purification (ZIP). Our framework can be applied to poisoned models without requiring internal information about the model or any prior knowledge of the clean/poisoned samples. Our defense framework involves two steps. First, we apply a linear transformation (e.g., blurring) on the poisoned image to destroy the backdoor pattern. Then, we use a pre-trained diffusion model to recover the missing semantic information removed by the transformation. In particular, we design a new reverse process by using the transformed image to guide the generation of high-fidelity purified images, which works in zero-shot settings. We evaluate our ZIP framework on multiple datasets with different types of attacks. Experimental results demonstrate the superiority of our ZIP framework compared to state-of-the-art backdoor defense baselines. We believe that our results will provide valuable insights for future defense methods for black-box models. Our code is available at https://github.com/sycny/ZIP.
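The first step of the framework described above — applying a linear transformation such as blurring to disrupt a backdoor trigger — can be illustrated with a minimal sketch. The box-blur kernel, toy image, and checkerboard "trigger" below are illustrative assumptions, not the paper's exact transform or attack pattern, and the diffusion-based recovery step (the core of ZIP) is omitted entirely:

```python
import numpy as np

def box_blur(img, k=5):
    """A simple separable box blur: one example of a linear transformation
    that averages away high-frequency patterns (illustrative; the paper's
    exact choice of transform may differ)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):          # sum every shifted copy of the image
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)         # normalize to the window average

# Toy "poisoned" image: flat background plus a high-frequency
# checkerboard patch standing in for a backdoor trigger.
img = np.full((32, 32), 0.5)
trigger = np.indices((6, 6)).sum(axis=0) % 2   # alternating 0/1 patch
img[:6, :6] = trigger

blurred = box_blur(img)
# Averaging pulls the alternating trigger toward the background value,
# sharply reducing its contrast (its standard deviation):
print(img[:6, :6].std(), blurred[:6, :6].std())
```

After this destructive step, ZIP's second step would pass the blurred image through a pre-trained diffusion model, using it to guide the reverse process so the lost semantic detail is regenerated without the trigger.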
Pages: 31
Related References
69 in total
  • [41] ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation
    Liu, Yingqi
    Lee, Wen-Chuan
    Tao, Guanhong
    Ma, Shiqing
    Aafer, Yousra
    Zhang, Xiangyu
    [J]. PROCEEDINGS OF THE 2019 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY (CCS'19), 2019, : 1265 - 1282
  • [42] May, Brandon B., ICLR 2023 Workshop BAC
  • [43] Nguyen, A. T., 2021, International Conference on Learning Representations
  • [44] Nie, W. L., 2022, Proceedings of Machine Learning Research
  • [45] Liu, Ninghao, 2021, ACM SIGKDD Explorations Newsletter, V23, P86, DOI 10.1145/3468507.3468519
  • [46] Predicting the Future - Big Data, Machine Learning, and Clinical Medicine
    Obermeyer, Ziad
    Emanuel, Ezekiel J.
    [J]. NEW ENGLAND JOURNAL OF MEDICINE, 2016, 375 (13) : 1216 - 1219
  • [47] Qi, X., 2023, 11th International Conference on Learning Representations
  • [48] DeepSweep: An Evaluation Framework for Mitigating DNN Backdoor Attacks using Data Augmentation
    Qiu, Han
    Zeng, Yi
    Guo, Shangwei
    Zhang, Tianwei
    Qiu, Meikang
    Thuraisingham, Bhavani
    [J]. ASIA CCS'21: PROCEEDINGS OF THE 2021 ACM ASIA CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2021, : 363 - 377
  • [49] Deep null space learning for inverse problems: convergence analysis and rates
    Schwab, Johannes
    Antholzer, Stephan
    Haltmeier, Markus
    [J]. INVERSE PROBLEMS, 2019, 35 (02)
  • [50] Shen, Guangyu, 2022, International Conference on Machine Learning, P19879