Revisiting model's uncertainty and confidences for adversarial example detection

Cited by: 19
Authors
Aldahdooh, Ahmed [1 ]
Hamidouche, Wassim [1 ]
Deforges, Olivier [1 ]
Affiliations
[1] Univ Rennes, CNRS, INSA Rennes, IETR UMR 6164, F-35000 Rennes, France
Keywords
Adversarial examples; Adversarial attacks; Adversarial example detection; Deep learning robustness
DOI
10.1007/s10489-022-03373-y
CLC number
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Security-sensitive applications that rely on Deep Neural Networks (DNNs) are vulnerable to small perturbations that are crafted to generate Adversarial Examples (AEs). These AEs are imperceptible to humans and cause DNNs to misclassify. Many defense and detection techniques have been proposed. Model confidences and Dropout, a popular way to estimate model uncertainty, have been used for AE detection, but they have shown limited success against black- and gray-box attacks. Moreover, state-of-the-art detection techniques are designed for specific attacks or are broken by others, require knowledge of the attack, are inconsistent, add model parameter overhead, are time-consuming, or introduce latency at inference time. To balance these factors, we revisit model uncertainty and confidences and propose a novel unsupervised ensemble AE detection mechanism that 1) uses the uncertainty method SelectiveNet, and 2) processes the outputs of model layers, i.e., feature maps, to generate new confidence probabilities. The detection method is called SFAD. Experimental results show that the proposed approach achieves better performance against black- and gray-box attacks than state-of-the-art methods, and comparable performance against white-box attacks. Moreover, results show that SFAD is fully robust against High Confidence Attacks (HCAs) on MNIST and partially robust on CIFAR-10.
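To illustrate the general idea of feature-map-based confidence detection described in the abstract, the following is a minimal PyTorch sketch: small auxiliary heads produce class probabilities from intermediate feature maps, and an input whose ensemble confidence falls below a threshold is flagged as adversarial. The backbone, the head design, and the threshold tau are illustrative assumptions, not the paper's actual SFAD or SelectiveNet architecture.

# Minimal sketch of feature-map-based AE detection (illustrative only,
# NOT the paper's SFAD implementation). Auxiliary softmax heads classify
# from intermediate feature maps; low ensemble confidence flags an input
# as a likely adversarial example.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyBackbone(nn.Module):
    """Stand-in classifier; in practice this would be the pretrained DNN."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.block2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.fc = nn.Linear(32, num_classes)

    def forward(self, x):
        f1 = self.block1(x)                 # first intermediate feature map
        f2 = self.block2(f1)                # second intermediate feature map
        logits = self.fc(f2.mean(dim=(2, 3)))
        return logits, [f1, f2]             # expose feature maps for detection

class FeatureConfidenceDetector(nn.Module):
    """Ensemble of small heads over feature maps; rejects low-confidence inputs."""
    def __init__(self, feat_channels=(16, 32), num_classes=10, tau=0.5):
        super().__init__()
        self.heads = nn.ModuleList(
            nn.Linear(c, num_classes) for c in feat_channels)
        self.tau = tau                      # rejection threshold (assumed value)

    def forward(self, feature_maps):
        # Global-average-pool each map, classify it, and average the softmaxes.
        probs = [F.softmax(h(f.mean(dim=(2, 3))), dim=1)
                 for h, f in zip(self.heads, feature_maps)]
        ens = torch.stack(probs).mean(dim=0)
        conf, pred = ens.max(dim=1)
        return pred, conf, conf < self.tau  # True -> flag as adversarial

model, detector = TinyBackbone(), FeatureConfidenceDetector()
x = torch.randn(4, 1, 28, 28)               # e.g. a batch of MNIST-sized images
logits, fmaps = model(x)
pred, conf, is_adv = detector(fmaps)
print(pred, conf, is_adv)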
Pages: 509-531
Number of pages: 23