Mind the Box: l1-APGD for Sparse Adversarial Attacks on Image Classifiers

Cited: 0
Authors
Croce, Francesco [1]
Hein, Matthias [1]
Institutions
[1] Univ Tubingen, Tubingen, Germany
Source
INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139 | 2021 / Vol. 139
Keywords
POPULATION; SAMPLES;
DOI
None available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We show that established ℓ1-projected gradient descent (PGD) attacks are suboptimal once the image domain [0,1]^d is also taken into account: they ignore that the effective threat model is the intersection of the ℓ1-ball and [0,1]^d. We study the expected sparsity of the steepest descent step for this effective threat model and show that the exact projection onto this set is computationally feasible and yields better performance. Moreover, we propose an adaptive form of PGD which is highly effective even with a small budget of iterations. The resulting ℓ1-APGD is a strong white-box attack showing that prior works overestimated their ℓ1-robustness. Using ℓ1-APGD for adversarial training we obtain a robust classifier with state-of-the-art ℓ1-robustness. Finally, we combine ℓ1-APGD with an adaptation of the Square Attack to ℓ1 into ℓ1-AutoAttack, an ensemble of attacks which reliably assesses adversarial robustness for the threat model of the ℓ1-ball intersected with [0,1]^d.
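The exact projection the abstract refers to can be sketched as follows. Projecting a point y onto {x : ||x − x0||_1 ≤ ε, 0 ≤ x ≤ 1} is a separable convex problem: for a fixed Lagrange multiplier λ on the ℓ1 constraint, the coordinate-wise optimum is soft-thresholding followed by clipping to the box, and the resulting ℓ1 norm is non-increasing in λ, so λ can be found by bisection. This is a minimal illustrative sketch (the function name `project_l1_box`, the bisection scheme, and the iteration count are assumptions, not the paper's own sorting-based implementation):

```python
import numpy as np

def project_l1_box(x0, y, eps, n_iter=50):
    """Project y onto {x : ||x - x0||_1 <= eps, 0 <= x <= 1}.

    Bisection on the Lagrange multiplier lam of the l1 constraint;
    for fixed lam the coordinate-wise solution is soft-thresholding
    of the target perturbation followed by clipping to the box.
    """
    g = y - x0                       # target perturbation
    lo, hi = -x0, 1.0 - x0           # box constraints on the perturbation

    def delta(lam):
        soft = np.sign(g) * np.maximum(np.abs(g) - lam, 0.0)
        return np.clip(soft, lo, hi)

    d0 = delta(0.0)
    if np.abs(d0).sum() <= eps:      # box-clipped y already inside the l1-ball
        return x0 + d0
    lam_lo, lam_hi = 0.0, np.abs(g).max()   # delta(lam_hi) == 0, hence feasible
    for _ in range(n_iter):
        lam = 0.5 * (lam_lo + lam_hi)
        if np.abs(delta(lam)).sum() > eps:
            lam_lo = lam             # constraint still violated: raise lam
        else:
            lam_hi = lam             # feasible: lower lam
    return x0 + delta(lam_hi)        # lam_hi side is always feasible
```

For x0 = (0.5, 0.5), y = (2, −1) and ε = 0.4, the projection lands at (0.7, 0.3): the perturbation budget is split symmetrically before the box constraints bind.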
Pages: 11