Minimally Distorted Structured Adversarial Attacks

Cited by: 4
Authors
Kazemi, Ehsan [1 ]
Kerdreux, Thomas [2 ]
Wang, Liqiang [1 ]
Affiliations
[1] Univ Cent Florida, Dept Comp Sci, Orlando, FL 32816 USA
[2] TU Univ Berlin, Zuse Inst Berlin, Berlin, Germany
Keywords
Adversarial attacks; Blurriness; Group norm; Image classification; WOLFE;
DOI
10.1007/s11263-022-01701-w
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
White-box adversarial perturbations are generated by iterative optimization algorithms, most often by minimizing an adversarial loss over an ℓp neighborhood of the original image, the so-called distortion set. Constraining the adversarial search with different norms yields differently structured adversarial examples. Here we explore several distortion sets with structure-enhancing algorithms. These new structures for adversarial examples may pose challenges for provable and empirical robustness mechanisms. Because adversarial robustness is still an empirical field, defense mechanisms should also reasonably be evaluated against differently structured attacks. Moreover, these structured adversarial perturbations may allow for larger distortion sizes than their ℓp counterparts while remaining imperceptible, or perceptible only as natural distortions of the image. We demonstrate in this work that the proposed structured adversarial examples can significantly reduce the classification accuracy of adversarially trained classifiers while exhibiting a low ℓ2 distortion rate. For instance, on the ImageNet dataset the structured attacks drop the accuracy of the adversarially trained model to near zero with only 50% of the ℓ2 distortion generated by white-box attacks such as PGD. As a byproduct, our findings on structured adversarial examples can be used for adversarial regularization of models, making them more robust or improving their generalization performance on structurally different datasets.
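The abstract describes the standard white-box setup the paper builds on: iteratively ascending an adversarial loss while projecting the perturbation back onto an ℓp distortion set (PGD is the baseline it compares against). Below is a minimal NumPy sketch of PGD on an ℓ2 ball, not the paper's structured attack; the function names and the toy linear "loss" are illustrative assumptions.

```python
import numpy as np

def l2_pgd(x, grad_fn, eps, step, n_steps=20):
    """Projected gradient ascent on the l2 ball of radius eps around x.

    Each iteration moves along the normalized loss gradient, then
    projects the cumulative perturbation back onto the distortion set
    {delta : ||delta||_2 <= eps}.
    """
    x_adv = x.copy()
    for _ in range(n_steps):
        g = grad_fn(x_adv)
        g_norm = np.linalg.norm(g)
        if g_norm == 0:
            break
        x_adv = x_adv + step * g / g_norm   # ascend the adversarial loss
        delta = x_adv - x
        d_norm = np.linalg.norm(delta)
        if d_norm > eps:                    # project onto the l2 ball
            delta = delta * (eps / d_norm)
        x_adv = x + delta
    return x_adv

# Toy example: the "loss" is a linear score w.x, so its gradient is w.
w = np.array([1.0, -2.0, 0.5])
x0 = np.zeros(3)
x_adv = l2_pgd(x0, grad_fn=lambda x: w, eps=0.3, step=0.1)
```

Swapping the ℓ2 projection for a projection onto a different distortion set (e.g. a group-norm ball, per the paper's keywords) is what changes the structure of the resulting perturbation.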
Pages: 160-176
Page count: 17