Incrementing Adversarial Robustness with Autoencoding for Machine Learning Model Attacks

Cited by: 6
Authors
Sivaslioglu, Salved [1]
Catak, Ferhat Ozgur [2]
Gul, Ensar [3]
Affiliations
[1] TUBITAK BILGEM, Kocaeli, Turkey
[2] TUBITAK BILGEM, Siber Guvenl Enstitusu, Kocaeli, Turkey
[3] Istanbul Sehir Univ, Bilgi Guvenligi Muhendisligi, Istanbul, Turkey
Source
2019 27TH SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE (SIU) | 2019
Keywords
Autoencoder; Machine learning; Adversarial robustness; Adversarial attacks
DOI
10.1109/siu.2019.8806432
Chinese Library Classification
TM [Electrical engineering]; TN [Electronics and communications technology]
Discipline Classification Code
0808; 0809
Abstract
Machine learning is now widely used, and attacks against the machine learning process have also emerged. In this study, robustness against machine learning model attacks, which can cause outcomes such as misclassification, disruption of decision mechanisms, and evasion of filters, is demonstrated using autoencoding, with non-targeted attacks applied to a model trained on the MNIST dataset. The results and improvements for the most common and important attack method, the non-targeted attack, are presented.
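As a rough sketch of the approach described in the abstract, the example below trains a small autoencoder on MNIST and passes perturbed test inputs through it before classification. The architecture, hyperparameters, and the additive random noise used as a stand-in for a real non-targeted attack are illustrative assumptions, not taken from the paper.

# Minimal sketch (not the authors' code): autoencoder reconstruction as a
# defensive preprocessing step before a classifier. All names, layer sizes,
# and the noise-based "attack" below are assumptions for illustration only.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Load MNIST, scale pixels to [0, 1], and flatten to 784-dimensional vectors.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype("float32").reshape(-1, 784) / 255.0
x_test = x_test.astype("float32").reshape(-1, 784) / 255.0

# A small dense autoencoder trained to reconstruct clean MNIST digits.
autoencoder = models.Sequential([
    layers.Input(shape=(784,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(32, activation="relu"),   # bottleneck
    layers.Dense(128, activation="relu"),
    layers.Dense(784, activation="sigmoid"),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x_train, x_train, epochs=5, batch_size=256, verbose=0)

# A simple classifier trained on the clean training data.
classifier = models.Sequential([
    layers.Input(shape=(784,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
classifier.compile(optimizer="adam",
                   loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])
classifier.fit(x_train, y_train, epochs=5, batch_size=256, verbose=0)

# Stand-in for a non-targeted attack: additive random noise, used only to
# exercise the defence pipeline; a real evaluation would use FGSM/PGD etc.
x_adv = np.clip(x_test + 0.25 * np.random.randn(*x_test.shape), 0.0, 1.0)

# Compare accuracy on perturbed inputs with and without autoencoder cleaning.
_, acc_adv = classifier.evaluate(x_adv, y_test, verbose=0)
x_cleaned = autoencoder.predict(x_adv, verbose=0)
_, acc_def = classifier.evaluate(x_cleaned, y_test, verbose=0)
print(f"accuracy on perturbed inputs:              {acc_adv:.3f}")
print(f"accuracy after autoencoder reconstruction: {acc_def:.3f}")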
Pages: 4