Poisoning attacks on machine learning models in cyber systems and mitigation strategies

Cited: 1
Authors
Izmailov, Rauf [1 ]
Venkatesan, Sridhar [1 ]
Reddy, Achyut [1 ]
Chadha, Ritu [1 ]
De Lucia, Michael [2 ]
Oprea, Alina [3 ]
Affiliations
[1] Peraton Labs Inc, Basking Ridge, NJ 07920 USA
[2] DEVCOM Army Res Lab, Aberdeen Proving Ground, MD USA
[3] Northeastern Univ, Boston, MA 02115 USA
Source
DISRUPTIVE TECHNOLOGIES IN INFORMATION SCIENCES VI | 2022 / Vol. 12117
Keywords
Machine learning; adversarial machine learning; network intrusion detection; data poisoning; data cleaning; classifier; poisoning attack;
DOI
10.1117/12.2622112
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Poisoning attacks on training data are becoming one of the top concerns among users of machine learning systems. The goal of such attacks is to inject a small set of maliciously mislabeled training data into the training pipeline so as to detrimentally impact a machine learning model trained on such data. Constructing such attacks for cyber applications is especially challenging due to their realizability constraints. Furthermore, poisoning mitigation techniques for such applications are also not well understood. This paper investigates techniques for realizable data poisoning availability attacks (using several cyber applications), in which an attacker can insert a set of poisoned samples at training time with the goal of degrading the accuracy of the deployed model. We design a white-box, realizable poisoning attack that degrades the original model's accuracy by generating mislabeled samples in the close vicinity of a selected subset of training points. We investigate this strategy and its modifications for key classifier architectures and provide specific implications for each of them. The paper also proposes a novel data cleaning method as a defense against such poisoning attacks. Our defense includes a diversified ensemble of classifiers, each trained on a different subset of the training set. We use the disagreement of the classifiers' predictions to decide whether to keep a given sample in the training dataset or remove it. The results demonstrate the effectiveness of this strategy at a very limited performance penalty.
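The attack and defense described in the abstract can be sketched as follows. This is a minimal illustration on synthetic 2-D data with a nearest-centroid stand-in classifier, not the paper's implementation; the ensemble size, subset fraction, noise scale, and disagreement threshold are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data (stand-in for cyber feature vectors).
n = 200
X = np.vstack([rng.normal(-2.0, 1.0, size=(n, 2)),
               rng.normal(+2.0, 1.0, size=(n, 2))])
y = np.array([0] * n + [1] * n)

# Attack sketch: mislabeled samples in the close vicinity of a
# selected subset of training points (small perturbation, flipped label).
idx = rng.choice(len(X), size=40, replace=False)
X_poison = X[idx] + rng.normal(scale=0.05, size=(40, 2))  # near-copies
y_poison = 1 - y[idx]                                     # flipped labels
X_train = np.vstack([X, X_poison])
y_train = np.concatenate([y, y_poison])
is_poison = np.concatenate([np.zeros(len(X), bool), np.ones(40, bool)])

def fit_centroids(Xs, ys):
    """Nearest-centroid 'classifier': one centroid per class."""
    return np.stack([Xs[ys == c].mean(axis=0) for c in (0, 1)])

def predict(centroids, Xs):
    d = np.linalg.norm(Xs[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# Defense sketch: a diversified ensemble, each member trained on a
# different random subset of the (possibly poisoned) training set.
K, votes = 15, []
for _ in range(K):
    sub = rng.choice(len(X_train), size=len(X_train) // 2, replace=False)
    votes.append(predict(fit_centroids(X_train[sub], y_train[sub]), X_train))
votes = np.stack(votes)  # shape (K, n_train)

# Disagreement between ensemble predictions and each sample's given label
# drives the keep/remove decision for data cleaning.
disagree = (votes != y_train[None, :]).mean(axis=0)
keep = disagree < 0.5  # remove samples most members disagree with

removed_poison = (~keep & is_poison).sum()
removed_clean = (~keep & ~is_poison).sum()
```

Because each ensemble member sees only half of the data, a small poisoned fraction cannot dominate every member, so the majority vote still reflects the clean distribution and mislabeled near-copies show high disagreement with their assigned labels.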
Pages: 10
Related Papers
(50 records)
  • [21] A Comprehensive Survey on Poisoning Attacks and Countermeasures in Machine Learning
    Tian, Zhiyi
    Cui, Lei
    Liang, Jie
    Yu, Shui
    ACM COMPUTING SURVEYS, 2023, 55 (08)
  • [22] Securing Machine Learning Against Data Poisoning Attacks
    Allheeib, Nasser
    INTERNATIONAL JOURNAL OF DATA WAREHOUSING AND MINING, 2024, 20 (01)
  • [23] Systematic Poisoning Attacks on and Defenses for Machine Learning in Healthcare
    Mozaffari-Kermani, Mehran
    Sur-Kolay, Susmita
    Raghunathan, Anand
    Jha, Niraj K.
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2015, 19 (06) : 1893 - 1905
  • [24] Automated poisoning attacks and defenses in malware detection systems: An adversarial machine learning approach
    Chen, Sen
    Xue, Minhui
    Fan, Lingling
    Hao, Shuang
    Xu, Lihua
    Zhu, Haojin
    Li, Bo
    COMPUTERS & SECURITY, 2018, 73 : 326 - 344
  • [26] Detection of cyber attacks in smart grids using SVM-boosted machine learning models
    Alwageed, Hathal Salamah
    SERVICE ORIENTED COMPUTING AND APPLICATIONS, 2022, 16 (04) : 313 - 326
  • [27] Detection of IoT Botnet Cyber Attacks Using Machine Learning
    Khaleefah A.D.
    Al-Mashhadi H.M.
    Informatica (Slovenia), 2023, 47 (06) : 55 - 64
  • [28] Adversarial attacks on machine learning-based cyber security systems: a survey of techniques and defences
    Patel, Pratik S.
    Panchal, Pooja
    INTERNATIONAL JOURNAL OF ELECTRONIC SECURITY AND DIGITAL FORENSICS, 2025, 17 (1-2)
  • [29] Internet of Things Cyber Attacks Detection using Machine Learning
    Alsamiri, Jadel
    Alsubhi, Khalid
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2019, 10 (12) : 627 - 634
  • [30] Preventing Machine Learning Poisoning Attacks Using Authentication and Provenance
    Stokes, Jack W.
    England, Paul
    Kane, Kevin
    2021 IEEE MILITARY COMMUNICATIONS CONFERENCE (MILCOM 2021), 2021,