Robustness-via-synthesis: Robust training with generative adversarial perturbations

Cited by: 5
Authors
Baytas, Inci M. [1 ]
Deb, Debayan [2 ]
Affiliations
[1] Bogazici Univ, Dept Comp Engn, Istanbul, Turkiye
[2] LENS Inc, Okemos, MI USA
Keywords
Adversarial robustness; Adversarial training; Adversarial attacks synthesis; Optimal transport; Projected gradient descent; ATTACKS; DEFENSE;
DOI
10.1016/j.neucom.2022.10.034
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Since the discovery of adversarial attacks, robust models have become obligatory for deep learning-based systems. Adversarial training with first-order attacks remains one of the most effective defenses against adversarial perturbations. The majority of adversarial training approaches iteratively perturb each pixel with the gradient of the loss function with respect to the input image. However, adversarial training with gradient-based attacks lacks diversity and does not generalize well to natural images or to a variety of attacks. This study presents a robust training algorithm in which adversarial perturbations are automatically synthesized from a random vector by a generator network. The classifier is trained with a cross-entropy loss regularized by the optimal transport distance between the representations of natural and synthesized adversarial samples. Unlike prevailing generative defenses, the proposed one-step attack generation framework synthesizes diverse perturbations without using the gradient of the classifier's loss. The main contributions of the proposed robust training framework are: i) preserving the state-of-the-art generalization performance of the deep model, ii) not requiring an iterative or recursive scheme, and iii) providing robustness comparable with the state of the art in the literature. Experimental results show that the proposed approach attains robustness comparable with various gradient-based and generative robust training techniques on the CIFAR10, CIFAR100, SVHN, and Tiny ImageNet datasets. In addition, compared to the baselines, the proposed robust training framework generalizes well to natural samples. Code and trained models are available at https://github.com/ALLab-Boun/robustness-via-synthesis.git.
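The abstract describes regularizing the cross-entropy loss with an optimal transport distance between natural and adversarial feature representations. The paper's exact formulation is not shown here; the following is a minimal NumPy sketch of an entropic (Sinkhorn) approximation of the optimal transport distance between two batches of feature vectors, a common way such a regularizer is computed in practice. The function name, hyperparameters, and uniform marginals are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sinkhorn_distance(X, Y, eps=1.0, n_iters=200):
    """Entropic-regularized optimal transport cost between two point clouds.

    X, Y: (n, d) and (m, d) arrays of feature vectors (illustrative stand-ins
    for natural and adversarial representations). Uniform marginals assumed.
    """
    # Pairwise squared-Euclidean ground cost matrix, shape (n, m)
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    K = np.exp(-C / eps)                        # Gibbs kernel
    a = np.full(X.shape[0], 1.0 / X.shape[0])   # uniform source marginal
    b = np.full(Y.shape[0], 1.0 / Y.shape[0])   # uniform target marginal
    u = np.ones_like(a)
    for _ in range(n_iters):                    # Sinkhorn fixed-point updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]             # approximate transport plan
    return float((P * C).sum())                 # transport cost <P, C>
```

In a training loop, a term like lambda * sinkhorn_distance(features_natural, features_adversarial) would be added to the classifier's cross-entropy loss, pulling the two feature distributions together; in a deep learning framework this computation would be made differentiable so gradients flow back through the features.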
Pages: 49 - 60
Page count: 12