EXPLORING ADVERSARIAL ATTACKS AND DEFENSES IN DEEP LEARNING

Cited by: 1
Authors
Thangaraju, Pajun [1]
Merkel, Cory [1]
Affiliation
[1] Rochester Inst Technol, Comp Engn Dept, Rochester, NY 14623 USA
Source
2022 IEEE INTERNATIONAL CONFERENCE ON ELECTRONICS, COMPUTING AND COMMUNICATION TECHNOLOGIES, CONECCT | 2022
Keywords
Adversarial Attacks; Defenses; FGSM; PGD; Carlini-Wagner; DDSA; CleverHans; Convolutional Neural Networks; Deep Learning; MNIST; Image Classification
DOI
10.1109/CONECCT55679.2022.9865841
CLC Number
TP [Automation Technology; Computer Technology]
Subject Classification Code
0812
Abstract
This paper takes a deep dive into an emerging field in deep learning: adversarial attacks and defenses. We first define adversarial examples and explain why they are important. We then explore different types of adversarial attacks and defenses, focusing on those associated with image classification, by examining their underlying concepts along with the tools and frameworks required to execute them. The implementation of the FGSM (Fast Gradient Sign Method) attack and the effectiveness of adversarial training as a defense against it are discussed: we first measure the drop in accuracy caused by performing the FGSM attack on an MNIST CNN (convolutional neural network) classifier, and then show that this accuracy is largely recovered by hardening the model with adversarial training.
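The FGSM attack discussed in the abstract perturbs an input in the direction of the sign of the loss gradient, x_adv = x + ε·sign(∇_x L(x, y)). A minimal sketch of this idea, using a toy NumPy logistic-regression classifier rather than the paper's MNIST CNN (the model, data, and ε value here are illustrative assumptions, not taken from the paper):

```python
# Minimal FGSM sketch on a toy logistic-regression classifier.
# The weights, input, and epsilon below are illustrative assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(x, y, w, b):
    # Binary cross-entropy for a single example.
    p = sigmoid(x @ w + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(x, y, w, b, eps):
    # For logistic regression, the gradient of the BCE loss
    # with respect to the input x is (p - y) * w.
    p = sigmoid(x @ w + b)
    grad_x = (p - y) * w
    # FGSM: take a step of size eps along the sign of the gradient,
    # which maximally increases the loss under an L-infinity budget.
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=4)   # "trained" weights (assumed)
b = 0.1
x = rng.normal(size=4)   # clean input
y = 1.0                  # true label

x_adv = fgsm(x, y, w, b, eps=0.1)
print("clean loss:", bce_loss(x, y, w, b))
print("adversarial loss:", bce_loss(x_adv, y, w, b))
```

Adversarial training, the defense evaluated in the paper, then augments the training set with such perturbed examples so the classifier learns to resist them; that is how the accuracy lost to the FGSM attack is recovered.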
Pages: 6