Black Box Attacks on Deep Anomaly Detectors

Cited by: 35
Authors
Kuppa, Aditya [1 ,2 ]
Grzonkowski, Slawomir [2 ]
Asghar, Muhammad Rizwan [3 ]
Le-Khac, Nhien-An [1 ]
Affiliations
[1] Univ Coll Dublin, Dublin, Ireland
[2] Symantec Corp, Tempe, AZ 85281 USA
[3] Univ Auckland, Auckland, New Zealand
Source
14TH INTERNATIONAL CONFERENCE ON AVAILABILITY, RELIABILITY AND SECURITY (ARES 2019) | 2019
Keywords
Black box attacks; Anomaly detection; Neural networks;
DOI
10.1145/3339252.3339266
CLC classification
TP [Automation technology, computer technology];
Discipline code
0812;
Abstract
The process of identifying the true anomalies in a given set of data instances is known as anomaly detection. It has been applied to a diverse set of problems across multiple application domains, including cybersecurity. Deep learning has recently demonstrated state-of-the-art performance on key anomaly detection applications such as intrusion detection, Denial of Service (DoS) attack detection, security log analysis, and malware detection. Despite the great successes achieved by neural network architectures, models with very low test error have been shown to be consistently vulnerable to small, adversarially chosen perturbations of the input. The existence of evasion attacks during the test phase of machine learning algorithms represents a significant challenge to both their deployment and their understanding. Recent approaches in the literature have focused on three different areas: (a) generating adversarial examples for supervised machine learning in multiple domains; (b) countering the attacks with various defenses; and (c) providing theoretical guarantees on the robustness of machine learning models by understanding their security properties. However, these efforts have not addressed the anomaly detection task in a black box setting. Exploring black box attack strategies that reduce the number of queries needed to find adversarial examples with high probability is therefore an important problem. In this paper, we study the security of black box deep anomaly detectors under a realistic threat model. We propose a novel black box attack for query-constrained settings. First, we run manifold approximation on samples collected at the attacker's end to reduce the number of queries and to estimate the thresholds set by the underlying anomaly detector; we then use spherical adversarial subspaces to generate attack samples. This method is well suited to attacking anomaly detectors in which the decision boundaries between nominal and anomalous classes are not well defined and decisions are made by applying a set of thresholds to anomaly scores. We validate our attack on state-of-the-art deep anomaly detectors and show that the attacker's goal is achieved under query-constrained settings. Our evaluation of the proposed approach shows promising results and demonstrates that our strategy can be successfully used against other anomaly detectors.
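Note: the abstract only sketches the attack pipeline at a high level. The Python sketch below illustrates one plausible reading of the spherical-perturbation step: sample candidate perturbations on a hypersphere around a seed input and keep those the detector scores as nominal. The detector interface query_score, the attacker-estimated threshold, and the radius schedule are assumptions made for illustration, not details taken from the paper.

import numpy as np

def spherical_candidates(x, radius, n_candidates, rng):
    # Sample points uniformly on a hypersphere of the given radius centred at x.
    noise = rng.normal(size=(n_candidates, x.shape[0]))
    noise /= np.linalg.norm(noise, axis=1, keepdims=True)   # unit directions
    return x + radius * noise

def black_box_attack(x, query_score, threshold, radii, n_candidates=64, seed=0):
    # query_score: hypothetical black box returning anomaly scores for a batch of
    #              inputs; this is the attacker's only access to the detector.
    # threshold:   anomaly-score threshold estimated by the attacker, e.g. from
    #              scores of locally collected nominal samples.
    rng = np.random.default_rng(seed)
    evasive = []
    for r in radii:                           # grow the sphere until evasion succeeds
        candidates = spherical_candidates(x, r, n_candidates, rng)
        scores = query_score(candidates)      # one batched query per radius
        evasive.extend(candidates[scores < threshold])
        if evasive:
            break
    return np.asarray(evasive)

In this reading, one batched query per radius keeps the query budget low, which matches the paper's stated goal of operating under query constraints.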
Pages: 10