Black-box adversarial transferability: An empirical study in cybersecurity perspective

Cited by: 2
Authors
Roshan, Khushnaseeb [1 ]
Zafar, Aasim [1 ]
Affiliations
[1] Aligarh Muslim Univ Cent Univ, Dept Comp Sci, Aligarh 202002, India
Keywords
Cyber attack detection; Deep neural network; Adversarial machine learning; Adversarial defence; Network security; Intrusion detection systems; Security; Attacks; Robustness
DOI
10.1016/j.cose.2024.103853
CLC classification
TP [Automation technology; computer technology]
Discipline classification code
0812
Abstract
The rapid advancement of artificial intelligence in cybersecurity raises significant security concerns. The vulnerability of deep learning models to adversarial attacks is one of the major issues. In adversarial machine learning, malicious users try to fool a deep learning model by inserting adversarially perturbed inputs during its training or testing phase, which lowers the model's confidence scores and results in incorrect classifications. The key novel contribution of this research is to empirically test the black-box adversarial transferability phenomenon in cyber attack detection systems: adversarially perturbed inputs generated through a surrogate model have a similar impact on the target model, producing the same incorrect classifications. To validate this phenomenon empirically, a surrogate model and a target model are used. The adversarially perturbed inputs are generated from the surrogate model, for which the attacker has complete information, and both the surrogate and target models are then evaluated on these inputs during the inference phase. We conducted extensive experiments on the CICDDoS-2019 dataset and report results in terms of performance metrics such as accuracy, precision, recall and F1-score. The findings indicate that deep learning models are highly susceptible to adversarial attacks even when the attacker has no access to the internal details of the target model. The results also indicate that white-box adversarial attacks have a more severe impact than black-box adversarial attacks. There is a need to investigate and explore adversarial defence techniques to increase the robustness of deep learning models against adversarial attacks.
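The surrogate-to-target experiment described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: it substitutes synthetic data for CICDDoS-2019, plain-numpy logistic regression for the deep neural networks, and FGSM (fast gradient sign method) for the unspecified attack algorithm. The attacker crafts perturbations using only the surrogate's gradients, then the same perturbed inputs are replayed against the black-box target.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for network-flow features (the paper uses CICDDoS-2019).
n, d = 2000, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.1, epochs=200, seed=0):
    """Plain-numpy logistic regression, standing in for a DNN classifier."""
    r = np.random.default_rng(seed)
    w = r.normal(scale=0.01, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        g = p - y                      # gradient of cross-entropy w.r.t. logits
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def accuracy(w, b, X, y):
    return float(((sigmoid(X @ w + b) > 0.5) == y).mean())

# Surrogate (fully known to the attacker) and target (black-box) models:
# trained on the same task but with different initialisations.
w_s, b_s = train_logreg(X, y, seed=1)
w_t, b_t = train_logreg(X, y, seed=2)

# FGSM using the *surrogate* gradients only: x_adv = x + eps * sign(dL/dx).
eps = 0.5
grad_x = (sigmoid(X @ w_s + b_s) - y)[:, None] * w_s[None, :]
X_adv = X + eps * np.sign(grad_x)

# Transferability: perturbations crafted on the surrogate also degrade the target.
print("target accuracy, clean inputs      :", accuracy(w_t, b_t, X, y))
print("target accuracy, adversarial inputs:", accuracy(w_t, b_t, X_adv, y))
```

The drop in the target's accuracy on inputs it never contributed gradients to is the black-box transferability effect the paper evaluates; the gap between the surrogate's own accuracy drop (white-box) and the target's drop (black-box) mirrors the paper's finding that white-box attacks are more severe.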
Pages: 16