A Learning and Masking Approach to Secure Learning

Cited: 21
Authors
Linh Nguyen [1 ]
Wang, Sky [1 ]
Sinha, Arunesh [1 ]
Affiliation
[1] Univ Michigan, Ann Arbor, MI 48109 USA
Source
DECISION AND GAME THEORY FOR SECURITY, GAMESEC 2018 | 2018 / Vol. 11199
DOI
10.1007/978-3-030-01554-1_26
CLC Number
TP301 [Theory, Methods];
Subject Classification
081202 ;
Abstract
Deep Neural Networks (DNNs) have been shown to be vulnerable to adversarial examples, which are data points cleverly constructed to fool the classifier. In this paper, we introduce a new perspective on the problem. We do so by first defining robustness of a classifier to adversarial exploitation. Further, we categorize attacks in the literature into high and low perturbation attacks. Next, we show that the defense problem can be posed as a learning problem itself, and we find that this approach is effective against high perturbation attacks. For low perturbation attacks, we present a classifier boundary masking method that uses noise to randomly shift the classifier boundary at runtime. We also show that our learning-based and masking-based defenses can work simultaneously to protect against multiple attacks. We demonstrate the efficacy of our techniques by experimenting with the MNIST and CIFAR-10 datasets.
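The boundary masking idea described in the abstract — randomly shifting the decision boundary at runtime via noise — can be sketched as follows. This is a minimal illustration assuming the noise is injected at the logit level of an already-trained classifier; the function name `masked_predict`, the `noise_scale` parameter, and the injection point are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_predict(logits, noise_scale=0.1):
    # Boundary masking (sketch): add fresh Gaussian noise to the logits
    # on every call, so the effective decision boundary shifts randomly
    # at runtime and an attacker cannot probe one fixed boundary.
    # `noise_scale` is a hypothetical tuning knob, not from the paper.
    noisy = np.asarray(logits, dtype=float) + rng.normal(
        0.0, noise_scale, size=np.shape(logits))
    return int(np.argmax(noisy))

# A confidently classified point keeps its label under the noise,
# while a point sitting near the boundary may flip between calls.
print(masked_predict([5.0, 0.0]))
print([masked_predict([1.00, 1.01]) for _ in range(5)])
```

The design intuition is that a low-perturbation adversarial example lies very close to the boundary, so a small random shift of the boundary at each query is enough to make the attack's carefully computed perturbation unreliable, while clean inputs far from the boundary are classified consistently.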
Pages: 453 / 464
Page count: 12