Evasion and Causative Attacks with Adversarial Deep Learning

Cited by: 0
Authors
Shi, Yi [1 ]
Sagduyu, Yalin E. [1 ]
Affiliations
[1] Intelligent Automat Inc, Rockville, MD 20855 USA
Source
MILCOM 2017 - 2017 IEEE MILITARY COMMUNICATIONS CONFERENCE (MILCOM) | 2017
Keywords
Machine learning; deep learning; cybersecurity; exploratory attack; evasion attack; causative attack; attack mitigation; text classification; image classification
DOI
Not available
Chinese Library Classification
TN [Electronic technology; communication technology];
Discipline Code
0809;
Abstract
This paper presents a novel approach to launching and defending against causative and evasion attacks on machine learning classifiers. As a preliminary step, the adversary launches an exploratory attack based on deep learning (DL): it builds a functionally equivalent classifier by polling the online target classifier with input data and observing the returned labels. Using this inferred classifier, the adversary can select samples according to their DL scores and feed them to the original classifier. In an evasion attack, the adversary feeds the target classifier test samples whose DL scores are close to the decision boundary, increasing the chance that these samples are misclassified. In a causative attack, the adversary feeds the target classifier training data after flipping the labels of samples whose DL scores are far from the decision boundary, reducing the reliability of the training process. Results for text and image classification show that the proposed evasion and causative attacks significantly increase the error during the test and training phases, respectively. A defense strategy is presented that changes a small number of the original classifier's labels to prevent its reliable inference by the adversary and thus its effective use in evasion and causative attacks. These findings identify new vulnerabilities of machine learning and demonstrate that a proactive defense mechanism can reduce the impact of the underlying attacks.
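The two selection rules in the abstract can be illustrated with a minimal sketch. The paper itself does not publish code; the function names, the score range [0, 1], and the 0.5 decision threshold below are assumptions for illustration only. Evasion picks samples whose surrogate scores lie nearest the boundary; the causative attack targets samples whose scores lie farthest from it.

```python
# Illustrative sketch (not the authors' implementation) of score-based
# sample selection for evasion and causative attacks. Scores are assumed
# to come from the adversary's inferred (surrogate) classifier, in [0, 1],
# with the decision boundary at 0.5.

def select_evasion_samples(scores, k):
    """Return indices of the k samples closest to the decision boundary;
    these are the most likely to be misclassified by the target."""
    ranked = sorted(range(len(scores)), key=lambda i: abs(scores[i] - 0.5))
    return ranked[:k]

def select_causative_samples(scores, k):
    """Return indices of the k samples farthest from the decision boundary;
    flipping their labels before (re)training degrades the target most."""
    ranked = sorted(range(len(scores)),
                    key=lambda i: abs(scores[i] - 0.5), reverse=True)
    return ranked[:k]

scores = [0.05, 0.48, 0.52, 0.91, 0.30]
print(select_evasion_samples(scores, 2))    # -> [1, 2]
print(select_causative_samples(scores, 2))  # -> [0, 3]
```

Ranking by distance from the boundary, rather than by raw score, keeps the rule symmetric across both classes.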
Pages: 243-248
Page count: 6