Black-box adversarial transferability: An empirical study in cybersecurity perspective

Cited by: 2
Authors
Roshan, Khushnaseeb [1 ]
Zafar, Aasim [1 ]
Affiliations
[1] Aligarh Muslim Univ Cent Univ, Dept Comp Sci, Aligarh 202002, India
Keywords
Cyber attack detection; Deep neural network; Adversarial machine learning; Adversarial defence; Network security; INTRUSION DETECTION SYSTEMS; SECURITY; ATTACKS; ROBUSTNESS;
DOI
10.1016/j.cose.2024.103853
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology]
Discipline code
0812
Abstract
The rapid advancement of artificial intelligence in cybersecurity raises significant security concerns. The vulnerability of deep learning models to adversarial attacks is one of the major issues. In adversarial machine learning, malicious users try to fool a deep learning model by feeding adversarial perturbation inputs into the model during its training or testing phase, which lowers the model's confidence scores and results in incorrect classifications. The key novel contribution of this research is to empirically test the black-box adversarial transferability phenomenon in cyber-attack detection systems: adversarial perturbation inputs generated through a surrogate model have a similar impact on the target model in producing incorrect classifications. To validate this phenomenon empirically, a surrogate model and a target model are used. The adversarial perturbation inputs are generated against the surrogate model, for which the attacker has complete information, and both the surrogate and target models are then evaluated on these inputs during the inference phase. We have conducted extensive experiments on the CICDDoS-2019 dataset, and the results are reported in terms of performance metrics such as accuracy, precision, recall, and F1-score. The findings indicate that deep learning models are highly susceptible to adversarial attacks even when the attacker has no access to the internal details of the target model. The results also indicate that white-box adversarial attacks have a more severe impact than black-box adversarial attacks. Adversarial defence techniques need to be investigated and explored to increase the robustness of deep learning models against adversarial attacks.
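The transfer-attack setup described in the abstract can be sketched in a few lines. The following is a minimal, self-contained illustration only, not the paper's actual experiment: the synthetic data, the two logistic-regression models (standing in for the surrogate and target deep networks), and the FGSM-style perturbation are all assumptions made here for demonstration. It shows the core mechanism: adversarial inputs crafted with full knowledge of the surrogate also degrade the independently trained target.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary classification data (a stand-in for network-flow features).
n, d = 2000, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)

def train_logreg(X, y, lr=0.1, epochs=200):
    """Train a logistic-regression model with plain gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
        g = p - y                                 # gradient of logistic loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def accuracy(w, b, X, y):
    return (((X @ w + b) > 0).astype(float) == y).mean()

# Surrogate (fully known to the attacker) and target models,
# trained on disjoint halves of the data.
ws, bs = train_logreg(X[:1000], y[:1000])
wt, bt = train_logreg(X[1000:], y[1000:])

# FGSM-style attack against the SURROGATE only: step each feature in the
# sign of the loss gradient with respect to the input.
eps = 0.5
Xtest, ytest = X[1000:], y[1000:]
p = 1.0 / (1.0 + np.exp(-(Xtest @ ws + bs)))
grad_x = np.outer(p - ytest, ws)      # d(loss)/d(x) for the logistic loss
X_adv = Xtest + eps * np.sign(grad_x)

# Transferability: the TARGET model, never seen by the attacker,
# also misclassifies the transferred adversarial inputs.
clean_acc = accuracy(wt, bt, Xtest, ytest)
adv_acc = accuracy(wt, bt, X_adv, ytest)
print(f"target accuracy on clean inputs:       {clean_acc:.3f}")
print(f"target accuracy on transferred attack: {adv_acc:.3f}")
```

Because both models fit the same underlying decision boundary, their loss gradients are correlated, so a perturbation adversarial for the surrogate tends to be adversarial for the target as well; this gradient alignment is the usual informal explanation for the transferability the paper measures.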
Pages: 16