Unveiling Hidden Variables in Adversarial Attack Transferability on Pre-Trained Models for COVID-19 Diagnosis

Cited by: 0
Authors
Akhtom, Dua'a [1]
Singh, Manmeet Mahinderjit [2]
Xinying, Chew [2]
Affiliations
[1] Univ Sains Malaysia, Sch Comp Sci, Gelugor 11700, Pulau Pinang, Malaysia
[2] Univ Sains Malaysia, Gelugor 11700, Pulau Pinang, Malaysia
Keywords
Adversarial attack; advanced persistent threat; trained model; robust DL; transferable attack
DOI
10.14569/IJACSA.2024.01511131
CLC Number
TP301 [Theory and Methods]
Discipline Code
081202
Abstract
Adversarial attacks represent a significant threat to the robustness and reliability of deep learning models, particularly in high-stakes domains such as medical diagnostics. Advanced Persistent Threat (APT) attacks, characterized by their stealth, complexity, and persistence, can exploit adversarial examples to undermine the integrity of AI-driven healthcare systems, posing severe risks to their operational security. This study examines the transferability of adversarial attacks across pre-trained models deployed for COVID-19 diagnosis. Using two prominent convolutional neural networks (CNNs), ResNet50 and EfficientNet-B0, it investigates the critical factors that influence the transferability of adversarial perturbations, a vulnerability that APT attackers could strategically exploit. By examining the roles of model architecture, pre-training dataset characteristics, and adversarial attack mechanisms, this research provides insight into how adversarial examples propagate in medical imaging. Experimental results demonstrate that model architectures differ in their susceptibility to transferred attacks: ResNet50, with its deeper layers and residual connections, displayed greater robustness to adversarial perturbations, whereas EfficientNet-B0, owing to its distinct feature extraction strategy, was more vulnerable to perturbations crafted using ResNet50's gradients. These findings underscore the influence of architectural design on a model's resilience to adversarial attacks. By advancing the understanding of adversarial robustness in medical AI, this study offers actionable guidance for mitigating the risks posed by adversarial examples and emerging threats, such as APT attacks, in real-world healthcare settings.
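
For concreteness, the cross-model transfer setup described in the abstract can be sketched in a few lines of PyTorch. The sketch below is illustrative only: it assumes an FGSM-style one-step attack (Goodfellow et al., 2015), ImageNet-pretrained torchvision weights, inputs scaled to [0, 1], and an arbitrary perturbation budget eps. The paper's actual fine-tuning for COVID-19 classification, its dataset, and its attack configuration are not reproduced here.

# Illustrative sketch: craft perturbations with a ResNet50 surrogate and
# measure how often they also fool EfficientNet-B0. The weights, eps value,
# and label setup are assumptions, not the paper's exact configuration.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50, efficientnet_b0

source = resnet50(weights="IMAGENET1K_V1").eval()         # surrogate: gradients taken here
target = efficientnet_b0(weights="IMAGENET1K_V1").eval()  # transfer victim

def fgsm(model, x, y, eps=8 / 255):
    # One-step FGSM: move x in the direction of the sign of the loss gradient.
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

@torch.no_grad()
def transfer_rate(x_adv, y):
    # Fraction of surrogate-crafted examples the victim also misclassifies.
    return (target(x_adv).argmax(dim=1) != y).float().mean().item()

# Usage: x is a batch of image tensors scaled to [0, 1], y the true labels.
# In the paper's setting, both models would first be fine-tuned on chest
# X-ray data before this measurement.
# x_adv = fgsm(source, x, y)
# print(f"transfer rate: {transfer_rate(x_adv, y):.2%}")

A higher transfer rate for perturbations crafted on the source model indicates greater cross-architecture transferability; a stricter variant would count only adversarial examples that already fool the surrogate before measuring transfer to the victim.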
Pages: 1343-1350 (8 pages)