Defending Against Universal Attacks Through Selective Feature Regeneration

Cited by: 26
Authors
Borkar, Tejas [1 ]
Heide, Felix [2 ,3 ]
Karam, Lina [1 ,4 ]
Affiliations
[1] Arizona State Univ, Tempe, AZ 85287 USA
[2] Princeton Univ, Princeton, NJ 08544 USA
[3] Algolux, Montreal, PQ, Canada
[4] Lebanese Amer Univ, Beirut, Lebanon
Source
2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2020
Keywords
ROBUSTNESS;
DOI
10.1109/CVPR42600.2020.00079
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Deep neural network (DNN) predictions have been shown to be vulnerable to carefully crafted adversarial perturbations. Specifically, image-agnostic (universal adversarial) perturbations added to any image can fool a target network into making erroneous predictions. Departing from existing defense strategies that work mostly in the image domain, we present a novel defense which operates in the DNN feature domain and effectively defends against such universal perturbations. Our approach identifies pre-trained convolutional features that are most vulnerable to adversarial noise and deploys trainable feature regeneration units which transform these DNN filter activations into resilient features that are robust to universal perturbations. Regenerating only the top 50% adversarially susceptible activations in at most 6 DNN layers and leaving all remaining DNN activations unchanged, we outperform existing defense strategies across different network architectures by more than 10% in restored accuracy. We show that without any additional modification, our defense trained on ImageNet with one type of universal attack examples effectively defends against other types of unseen universal attacks.
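For intuition, a minimal sketch of the idea described in the abstract is given below: a small trainable unit regenerates only the channels of a frozen convolutional layer that are ranked as most susceptible to adversarial noise, and passes the remaining channels through unchanged. This is not the authors' implementation; the class and variable names (FeatureRegenerationUnit, susceptible_idx, top_susceptible_channels) and the filter-norm ranking proxy are assumptions made for illustration, and the paper's actual unit architecture and susceptibility measure may differ.

```python
import torch
import torch.nn as nn


class FeatureRegenerationUnit(nn.Module):
    """Trainable block that regenerates only the most adversarially
    susceptible channels of a frozen conv layer's output.
    Hypothetical sketch; not the paper's exact unit design."""

    def __init__(self, n_regen_channels: int):
        super().__init__()
        # Small residual conv stack applied only to the selected channels.
        self.regen = nn.Sequential(
            nn.Conv2d(n_regen_channels, n_regen_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(n_regen_channels, n_regen_channels, 3, padding=1),
        )

    def forward(self, feats: torch.Tensor, susceptible_idx: torch.Tensor) -> torch.Tensor:
        out = feats.clone()
        selected = feats[:, susceptible_idx]
        # Regenerate only the susceptible channels; all others pass through untouched.
        out[:, susceptible_idx] = selected + self.regen(selected)
        return out


def top_susceptible_channels(conv: nn.Conv2d, fraction: float = 0.5) -> torch.Tensor:
    """Rank filters by a simple susceptibility proxy (l1 weight norm, an
    assumption for this sketch) and return indices of the top `fraction`."""
    scores = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    k = max(1, int(fraction * conv.out_channels))
    return torch.topk(scores, k).indices


if __name__ == "__main__":
    conv = nn.Conv2d(3, 64, 3, padding=1)        # stands in for a frozen, pre-trained layer
    for p in conv.parameters():
        p.requires_grad_(False)

    idx = top_susceptible_channels(conv, 0.5)    # top 50% most susceptible filters
    fru = FeatureRegenerationUnit(n_regen_channels=idx.numel())

    x = torch.randn(2, 3, 32, 32)
    y = fru(conv(x), idx)                        # regenerated feature map, same shape as conv(x)
    print(y.shape)                               # torch.Size([2, 64, 32, 32])
```

Under this sketch, only the regeneration units would be trained (against universal attack examples), while the original network weights stay fixed, mirroring the abstract's claim that at most 6 layers receive such units and all other activations are left unchanged.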
Pages: 706 - 716
Page count: 11