A Network Security Classifier Defense Against Adversarial Machine Learning Attacks

Cited by: 5
Authors
De Lucia, Michael J. [1 ]
Cotton, Chase [2 ]
Affiliations
[1] US Army Res Lab, Network Sci Div, Aberdeen Proving Ground, MD 21005 USA
[2] Univ Delaware, Dept Elect & Comp Engn, Newark, DE USA
Source
PROCEEDINGS OF THE 2ND ACM WORKSHOP ON WIRELESS SECURITY AND MACHINE LEARNING, WISEML 2020 | 2020
Keywords
Adversarial Machine Learning; Machine Learning; Network Security; Cyber Security; Cyber Defense; Ensemble
DOI
10.1145/3395352.3402627
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
The discovery of practical adversarial machine learning (AML) attacks against machine learning-based wired and wireless network security detectors has made a defense a necessity. Without a defense mechanism against AML, attacks in wired and wireless networks will go unnoticed by network security classifiers, rendering those classifiers ineffective. Therefore, it is essential to motivate a defense against AML attacks for network security classifiers. Existing AML defenses are generally developed within the context of image recognition. However, these AML defenses have limited transferability to a network security context. Unlike image recognition, the features of a network security classifier are generally derived by a subject matter expert. Therefore, a network security classifier requires a distinctive defense strategy. We propose a novel defense-in-depth approach for network security classifiers using a hierarchical ensemble of classifiers, each using a disparate feature set. Subsequently, we show the effective use of our hierarchical ensemble to defend an existing network security classifier against an AML attack. Additionally, we discover a novel set of features to detect network scanning activity. Lastly, we propose to enhance our AML defense approach in future work. A shortcoming of our approach is the increased cost to the defender of implementing each independent classifier. Therefore, we propose combining our AML defense with a moving target defense approach. Additionally, we propose to evaluate our AML defense with a variety of datasets and classifiers, and to evaluate the effectiveness of decomposing a classifier with many features into multiple classifiers, each with a small subset of the features.
Pages: 67-73
Number of pages: 7
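
The abstract above rests on one core idea: split the feature space across several classifiers so that an adversarial perturbation tuned to one feature set does not automatically evade the members that use the other feature sets. The snippet below is a minimal sketch of that idea, assuming scikit-learn-style estimators, hand-chosen feature groups, and a simple majority vote; these choices are illustrative assumptions, not the authors' implementation, which is described in the full paper.

```python
# Illustrative sketch of an ensemble whose members are trained on disjoint
# feature subsets. The estimators, feature groupings, and majority-vote rule
# are assumptions for demonstration, not the paper's actual design.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split


class DisjointFeatureEnsemble:
    """Ensemble in which each member sees only its own column subset."""

    def __init__(self, feature_groups):
        # feature_groups: list of lists of column indices, one per member.
        self.feature_groups = feature_groups
        self.members = [RandomForestClassifier(n_estimators=50, random_state=i)
                        for i, _ in enumerate(feature_groups)]

    def fit(self, X, y):
        for clf, cols in zip(self.members, self.feature_groups):
            clf.fit(X[:, cols], y)
        return self

    def predict(self, X):
        # Flag a sample as malicious only if a majority of members agree;
        # an evasion crafted against one feature subset must also fool the
        # members that rely on the remaining subsets.
        votes = np.stack([clf.predict(X[:, cols])
                          for clf, cols in zip(self.members, self.feature_groups)])
        return (votes.mean(axis=0) >= 0.5).astype(int)


if __name__ == "__main__":
    # Synthetic stand-in for a network security dataset.
    X, y = make_classification(n_samples=2000, n_features=12, n_informative=8,
                               random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    # Three members, each restricted to four of the twelve features.
    ensemble = DisjointFeatureEnsemble([[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]])
    ensemble.fit(X_tr, y_tr)
    print("accuracy:", (ensemble.predict(X_te) == y_te).mean())
```

With three members each restricted to a third of the features, a perturbation confined to one group's features can flip at most one vote, which is the intuition behind the defense-in-depth framing; it also makes concrete the cost the abstract notes, since the defender must train and maintain each independent classifier.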