Understanding deep learning defenses against adversarial examples through visualizations for dynamic risk assessment

Cited by: 3
Authors
Echeberria-Barrio, Xabier [1 ]
Gil-Lerchundi, Amaia [1 ]
Egana-Zubia, Jon [1 ]
Orduna-Urrutia, Raul [1 ]
Affiliation
[1] Vicomtech Fdn, Basque Res & Technol Alliance BRTA, Mikeletegi 57, Donostia San Sebastian 20009, Spain
Keywords
Adversarial attacks; Adversarial defenses; Visualization
DOI
10.1007/s00521-021-06812-y
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
In recent years, deep neural network models have been developed in many different fields, where they have brought significant advances. However, they have also started to be used in tasks where risk is critical, and a misprediction by such a model can lead to serious accidents or even death. This concern has led researchers to study possible attacks on these models, uncovering a long list of vulnerabilities against which every model should be defended. The adversarial example attack is widely known among researchers, who have developed several defenses against this threat. However, these defenses are as opaque as a deep neural network model itself: how they work is still not well understood. Visualizing how a defense changes the behavior of the target model is therefore valuable for understanding more precisely how the defended model's performance is being modified. In this work, three defense strategies against adversarial example attacks have been selected in order to visualize the behavior modification each of them induces in the defended model. Adversarial training, dimensionality reduction, and prediction similarity were the selected defenses, applied to a model composed of convolutional and dense neural network layers. For each defense, the behavior of the original model has been compared with the behavior of the defended model, representing the target model as a graph in a visualization. This visualization allows identifying the vulnerabilities of the model and shows how the defenses try to avoid them.
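As context for the adversarial training defense the abstract mentions, the adversarial examples it trains on are typically generated by perturbing an input in the direction of the loss gradient (the Fast Gradient Sign Method of Goodfellow et al.). The sketch below is not the authors' implementation; it is a minimal, self-contained illustration on a toy logistic classifier, with all weights and inputs invented for the example.

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """FGSM: take a step of size epsilon in the sign of the loss gradient."""
    return x + epsilon * np.sign(grad)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear classifier: p = sigmoid(w . x); loss = -log(p) for true label 1.
w = np.array([0.5, -1.0, 2.0])   # hypothetical weights
x = np.array([1.0, 2.0, 0.5])    # hypothetical clean input

p = sigmoid(w @ x)               # model confidence in the true class
grad_x = -(1.0 - p) * w          # gradient of -log(p) with respect to x

x_adv = fgsm_perturb(x, grad_x, epsilon=0.1)
p_adv = sigmoid(w @ x_adv)       # confidence drops on the adversarial input
```

Adversarial training would then mix such `x_adv` samples into the training set so the model learns to classify them correctly, which is the behavior change the paper's visualizations examine.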
Pages: 20477-20490
Page count: 14