Black-box adversarial attack defense approach: An empirical analysis from cybersecurity perspective

Cited by: 1
Authors
Barik, Kousik [1 ]
Misra, Sanjay [2 ]
Lopez-Baldominos, Ines [1 ]
Affiliations
[1] Univ Alcala, Dept Comp Sci, Madrid, Spain
[2] Inst Energy Technol, Dept Appl Data Sci, Halden, Norway
Keywords
Adversarial machine learning; Cybersecurity; Black-box attack; Adversarial defense; Deep learning; SYSTEMS;
DOI
10.1016/j.rineng.2025.105177
Chinese Library Classification
T [Industrial Technology];
Subject Classification Code
08;
Abstract
The advancement of deep learning (DL) techniques has transformed various industries and increased the number of interconnected systems. In Adversarial Machine Learning (AML), adversaries aim to fool Machine Learning (ML) and DL models into producing false predictions using intentionally crafted adversarial examples. ML- and DL-based models are therefore susceptible to adversarial attacks, which poses significant challenges for their adoption in real-world systems such as intrusion detection systems (IDS). This study proposes a novel hybrid defense model to evaluate the concept of black-box adversarial transferability in cybersecurity attack detection. Surrogate and target models are used to validate this concept thoroughly. The proposed model applies heuristic-based defense methods in both the training and testing phases. It incorporates data preprocessing via quantile transformation and feature extraction using kernel principal component analysis (kernel PCA) for nonlinear dimensionality reduction. Two well-known adversarial attack generation methods, the Fast Gradient Sign Method (FGSM) and Universal Adversarial Perturbation (UAP), are employed, and three distinct scenarios are presented for evaluation. The results demonstrate an accuracy of 99.29 %, precision of 99.61 %, recall of 99.54 %, attack success rate (ASR) of 0.18 %, true positive rate (TPR) of 99.32 %, and specificity of 98.65 % using the UAP method. We further evaluated the model on a balanced dataset and examined latency, model size, and computational cost for real-time applicability. The findings indicate that DL-based models are highly vulnerable to adversarial attacks even when adversaries have no access to the internal details of the target system. The presented study can aid management in devising effective adversarial attack detection strategies to enhance cyberattack detection systems. It also contributes significantly to the information systems (IS) knowledge base and provides future directions for new researchers to explore, develop, and extend the current work.
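The abstract describes a pipeline of quantile transformation, kernel PCA for nonlinear dimensionality reduction, and FGSM-style adversarial example generation against a surrogate model. A minimal sketch of those steps using scikit-learn on synthetic data follows; the dataset, the logistic-regression surrogate, and all hyperparameters are illustrative assumptions, not the paper's actual configuration, and FGSM is applied in the reduced feature space for simplicity:

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import QuantileTransformer

# Synthetic stand-in for skewed network-flow features (illustrative only).
rng = np.random.default_rng(42)
X = rng.lognormal(mean=0.0, sigma=1.0, size=(400, 12))
y = (X[:, 0] + X[:, 1] > np.median(X[:, 0] + X[:, 1])).astype(int)

# Step 1: quantile transformation maps each skewed feature to ~N(0, 1).
qt = QuantileTransformer(output_distribution="normal", n_quantiles=100,
                         random_state=0)
Xq = qt.fit_transform(X)

# Step 2: kernel PCA for nonlinear dimensionality reduction.
kpca = KernelPCA(n_components=5, kernel="rbf", random_state=0)
Z = kpca.fit_transform(Xq)

# Surrogate classifier (a stand-in for the paper's unspecified surrogate).
clf = LogisticRegression(max_iter=1000).fit(Z, y)

def fgsm(model, Z, y, eps=0.5):
    """FGSM step z_adv = z + eps * sign(grad_z loss), using the analytic
    cross-entropy gradient (p - y) * w of a logistic model."""
    p = model.predict_proba(Z)[:, 1]
    grad = (p - y)[:, None] * model.coef_
    return Z + eps * np.sign(grad)

Z_adv = fgsm(clf, Z, y)
clean_acc = clf.score(Z, y)
adv_acc = clf.score(Z_adv, y)
print(f"clean accuracy: {clean_acc:.3f}, adversarial accuracy: {adv_acc:.3f}")
```

Because each FGSM step moves every sample in a loss-increasing direction for the logistic surrogate, adversarial accuracy can only fall or stay level; the transferability question the paper studies is whether such perturbations, crafted on a surrogate, also degrade a black-box target model.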
Pages: 23
References: 68