The advancement of deep learning (DL) techniques has transformed various industries and increased the number of interconnected systems. In adversarial machine learning (AML), adversaries aim to fool machine learning (ML) and DL models into producing false predictions by crafting intentionally perturbed adversarial examples. As a result, ML- and DL-based models are susceptible to adversarial attacks, which poses significant challenges for their adoption in real-world systems such as intrusion detection systems (IDS). This study proposes a novel hybrid defense model and evaluates the concept of black-box adversarial transferability in cybersecurity attack detection, using surrogate and target models to validate the concept thoroughly. The proposed model incorporates heuristic-based defense methods in both the training and testing phases, with data preprocessing via quantile transformation and feature extraction using kernel principal component analysis (kernel PCA) for nonlinear dimensionality reduction. Two well-known adversarial attack generation methods, the Fast Gradient Sign Method (FGSM) and Universal Adversarial Perturbation (UAP), are employed, and three distinct scenarios are presented for evaluation. The results demonstrate an accuracy of 99.29%, precision of 99.61%, recall of 99.54%, attack success rate (ASR) of 0.18%, true positive rate (TPR) of 99.32%, and specificity of 98.65% with the UAP method. We further evaluate the model on a balanced dataset and examine latency, model size, and computational cost to assess real-time applicability. The findings indicate that DL-based models are highly vulnerable to adversarial attacks even when adversaries have no access to the internal details of the target system. The presented study can aid management in devising effective adversarial attack detection strategies that strengthen cyberattack detection systems. It also contributes to the information systems (IS) knowledge base and provides future directions for researchers to explore, develop, and extend the current work.
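
As a rough illustration of the pipeline summarized above, the following minimal sketch assumes scikit-learn for the quantile transformation and kernel PCA steps and PyTorch for a surrogate classifier and the FGSM attack; it is not the authors' implementation, and names such as preprocess, fgsm, surrogate_model, target_model, and epsilon are hypothetical placeholders chosen for illustration.

    # Illustrative sketch only: quantile-transform features, reduce dimensionality
    # with kernel PCA, then craft FGSM adversarial examples on a surrogate model
    # and test whether they transfer to a separate (black-box) target model.
    import torch
    import torch.nn as nn
    from sklearn.preprocessing import QuantileTransformer
    from sklearn.decomposition import KernelPCA

    def preprocess(X_train, X_test, n_components=20):
        """Quantile transformation followed by kernel PCA, fit on training data only."""
        qt = QuantileTransformer(output_distribution="normal")
        kpca = KernelPCA(n_components=n_components, kernel="rbf")
        X_train_p = kpca.fit_transform(qt.fit_transform(X_train))
        X_test_p = kpca.transform(qt.transform(X_test))
        return X_train_p, X_test_p

    def fgsm(model, x, y, epsilon=0.05):
        """Single-step FGSM: perturb inputs along the sign of the loss gradient."""
        x = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        return (x + epsilon * x.grad.sign()).detach()

    # Hypothetical black-box transferability check: adversarial examples crafted on
    # the surrogate are evaluated against an independently trained target model.
    # x_adv = fgsm(surrogate_model, x_batch, y_batch)
    # transfer_rate = (target_model(x_adv).argmax(dim=1) != y_batch).float().mean()

In this sketch the perturbation is applied in the model's (kernel-PCA-transformed) input space, and the transfer rate serves as a simple proxy for the attack success rate reported above; the actual attack budgets, scenarios, and UAP generation in the study may differ.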