Analysis of classifiers' robustness to adversarial perturbations

Cited by: 146
Authors
Fawzi, Alhussein [1]
Fawzi, Omar [2]
Frossard, Pascal [1]
Affiliations
[1] Ecole Polytechnique Federale de Lausanne (EPFL), Signal Processing Laboratory (LTS4), Lausanne, Switzerland
[2] ENS Lyon, LIP, Lyon, France
Keywords
Adversarial examples; Classification robustness; Random noise; Instability; Deep networks; Error bounds; Stability; Systems
DOI
10.1007/s10994-017-5663-3
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
The goal of this paper is to analyze the intriguing instability of classifiers to adversarial perturbations (Szegedy et al., in: International Conference on Learning Representations (ICLR), 2014). We provide a theoretical framework for analyzing the robustness of classifiers to adversarial perturbations, and show fundamental upper bounds on the robustness of classifiers. Specifically, we establish a general upper bound on the robustness of classifiers to adversarial perturbations, and then illustrate the obtained upper bound on two practical classes of classifiers, namely linear and quadratic classifiers. In both cases, our upper bound depends on a distinguishability measure that captures the notion of difficulty of the classification task. Our results for both classes imply that in tasks involving small distinguishability, no classifier in the considered set will be robust to adversarial perturbations, even if good accuracy is achieved. Our theoretical framework moreover suggests that the phenomenon of adversarial instability is due to the low flexibility of classifiers compared to the difficulty of the classification task (captured mathematically by the distinguishability measure). We further show the existence of a clear distinction between the robustness of a classifier to random noise and its robustness to adversarial perturbations. Specifically, for linear classifiers the former is shown to be larger than the latter by a factor proportional to √d (with d being the signal dimension). This result gives a theoretical explanation for the discrepancy between the two robustness properties in high-dimensional problems, which was empirically observed by Szegedy et al. in the context of neural networks. We finally show experimental results on controlled and real-world data that confirm the theoretical analysis and extend its spirit to more complex classification schemes.
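For linear classifiers, the √d separation described in the abstract can be illustrated with a short numerical experiment. The sketch below is not the authors' code and does not reproduce their experiments; it assumes a synthetic weight vector w, a point x at unit distance from the decision boundary of f(x) = <w, x> + b, and uses the median distance to the boundary along random directions as an illustrative stand-in for robustness to random noise.

# Minimal sketch (illustrative assumptions, not the paper's setup): for a
# linear classifier f(x) = <w, x> + b, the smallest perturbation flipping
# the decision at x has norm |f(x)| / ||w||, while the distance to the
# boundary along a random direction is typically about sqrt(d) times larger.
import numpy as np

rng = np.random.default_rng(0)
d = 2_500                            # signal dimension
w = rng.normal(size=d)               # hypothetical classifier weights
b = 0.0
x = w / np.linalg.norm(w)            # a point at distance 1 from the boundary
f_x = w @ x + b

# Adversarial robustness: distance to the hyperplane, reached by moving along w.
r_adv = abs(f_x) / np.linalg.norm(w)

# Proxy for random-noise robustness: distance to the boundary along uniformly
# random unit directions v, i.e. the t solving f(x + t v) = 0, so t = -f(x)/<w, v>.
dirs = rng.normal(size=(1_000, d))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
r_rand = np.median(abs(f_x) / np.abs(dirs @ w))

print(f"adversarial robustness      : {r_adv:.3f}")
print(f"random-direction robustness : {r_rand:.1f}")
print(f"ratio                       : {r_rand / r_adv:.1f}  (sqrt(d) = {np.sqrt(d):.1f})")

With d = 2500 the printed ratio comes out on the order of √d = 50 (up to a constant that depends on the distribution of random directions), while the worst-case perturbation stays at norm 1, which is the qualitative gap the abstract refers to.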
Pages: 481-508
Number of pages: 28
Related papers
38 in total
[1] [Anonymous] (2008). Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Short Papers.
[2] [Anonymous]. British Machine Vision Conference (BMVC).
[3] [Anonymous] (2010). Journal of Machine Learning Research. DOI 10.5555/1756006.1859899.
[4] [Anonymous] (2014). Towards deep neural network architectures robust to adversarial examples.
[5] Barreno, M. (2006). Proceedings of the 2006 ACM Symposium on Information, Computer and Communications Security, p. 16.
[6] Bendale, A., & Boult, T. E. (2016). Towards Open Set Deep Networks. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1563-1572.
[7] Bhatia, R. (2013). Matrix Analysis, Vol. 169. DOI 10.1007/978-1-4612-0653-8.
[8] Biggio, B. (2012). arXiv:1206.6389.
[9] Biggio, B. (2013). Proceedings of the 2013 European Conference on Machine Learning and Knowledge Discovery in Databases (ECML PKDD). DOI 10.1007/978-3-642-40994-3_25.
[10] Bousquet, O., & Elisseeff, A. (2002). Stability and generalization. Journal of Machine Learning Research, 2(3), 499-526.