Evaluating and enhancing the robustness of vision transformers against adversarial attacks in medical imaging

Cited by: 1
Authors
Kanca, Elif [1 ]
Ayas, Selen [2 ]
Kablan, Elif Baykal [1 ]
Ekinci, Murat [2 ]
Affiliations
[1] Karadeniz Tech Univ, Dept Software Engn, Trabzon, Turkiye
[2] Karadeniz Tech Univ, Dept Comp Engn, Trabzon, Turkiye
Keywords
Adversarial attacks; Adversarial defense; Vision transformer; Medical image classification; Diabetic retinopathy; Validation
DOI
10.1007/s11517-024-03226-5
Chinese Library Classification
TP39 [Computer applications]
Subject Classification Codes
081203; 0835
Abstract
Deep neural networks (DNNs) have demonstrated exceptional performance in medical image analysis. However, recent studies have uncovered significant vulnerabilities in DNN models, particularly their susceptibility to adversarial attacks that manipulate these models into making inaccurate predictions. Vision Transformers (ViTs), despite their advanced capabilities in medical imaging tasks, have not been thoroughly evaluated for robustness against such attacks in this domain. This study addresses that gap by conducting an extensive analysis of various adversarial attacks on ViTs within medical imaging contexts. We explore adversarial training as a potential defense mechanism and assess the resilience of ViT models against state-of-the-art adversarial attacks and defense strategies using publicly available benchmark medical image datasets. Our findings reveal that ViTs are vulnerable to adversarial attacks even with minimal perturbations, although adversarial training significantly enhances their robustness, achieving over 80% classification accuracy. Additionally, we perform a comparative analysis with state-of-the-art convolutional neural network models, highlighting the unique strengths and weaknesses of ViTs in handling adversarial threats. This research advances the understanding of ViT robustness in medical imaging and provides insights into their practical deployment in real-world scenarios.
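The abstract refers to crafting small adversarial perturbations against ViT classifiers and to adversarial training as a defense. As an illustration only, and not the authors' published setup, the following minimal PyTorch sketch shows an L-infinity PGD attack and one adversarial-training step on a torchvision ViT; the vit_b_16 backbone, the 4-class head, and the epsilon and step-size values are placeholder assumptions.

import torch
import torch.nn.functional as F
from torchvision.models import vit_b_16

def pgd_attack(model, images, labels, eps=2/255, alpha=0.5/255, steps=10):
    # L_inf projected gradient descent; images are assumed to lie in [0, 1].
    adv = images.clone().detach()
    adv = (adv + torch.empty_like(adv).uniform_(-eps, eps)).clamp(0, 1)  # random start
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()                          # ascend the loss
        adv = (images + (adv - images).clamp(-eps, eps)).clamp(0, 1)      # project into eps-ball
    return adv.detach()

# Placeholder ViT classifier (expects 224x224 inputs) and optimizer.
model = vit_b_16(num_classes=4)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def adversarial_training_step(images, labels):
    # Standard adversarial training: optimize on adversarial examples instead of clean ones.
    model.eval()                       # fix dropout while crafting the attack
    adv = pgd_attack(model, images, labels)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(adv), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

In practice, each minibatch from the medical image dataset would pass through adversarial_training_step, and robustness would be reported as accuracy on adversarial test examples.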
Pages: 673-690
Number of pages: 18
Related Papers
50 records in total
  • [1] Evaluating and enhancing the robustness of vision transformers against adversarial attacks in medical imaging (OCT, 10.1007/s11517-024-03226-5, 2024)
    Kanca, Elif
    Ayas, Selen
    Kablan, Elif Baykal
    Ekinci, Murat
    MEDICAL & BIOLOGICAL ENGINEERING & COMPUTING, 2024, : 691 - 691
  • [2] Enhancing Robustness Against Adversarial Attacks in Multimodal Emotion Recognition With Spiking Transformers
    Chen, Guoming
    Qian, Zhuoxian
    Zhang, Dong
    Qiu, Shuang
    Zhou, Ruqi
    IEEE ACCESS, 2025, 13 : 34584 - 34597
  • [3] Enhancing the adversarial robustness in medical image classification: exploring adversarial machine learning with vision transformers-based models
    Gulsoy, Elif Kanca
    Ayas, Selen
    Kablan, Elif Baykal
    Ekinci, Murat
    NEURAL COMPUTING AND APPLICATIONS, 2025, 37 (12) : 7971 - 7989
  • [4] On the Robustness of Vision Transformers to Adversarial Examples
    Mahmood, Kaleel
    Mahmood, Rigel
    van Dijk, Marten
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 7818 - 7827
  • [5] Enhancing the robustness of QMIX against state-adversarial attacks
    Guo, Weiran
    Liu, Guanjun
    Zhou, Ziyuan
    Wang, Ling
    Wang, Jiacun
    NEUROCOMPUTING, 2024, 572
  • [6] Enhancing Model Robustness Against Adversarial Attacks with an Anti-adversarial Module
    Qin, Zhiquan
    Liu, Guoxing
    Lin, Xianming
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT IX, 2024, 14433 : 66 - 78
  • [7] Enhancing the robustness of vision transformer defense against adversarial attacks based on squeeze-and-excitation module
    Chang, YouKang
    Zhao, Hong
    Wang, Weijie
    PEERJ COMPUTER SCIENCE, 2023, 9
  • [8] Camouflage Is All You Need: Evaluating and Enhancing Transformer Models Robustness Against Camouflage Adversarial Attacks
    Huertas-Garcia, Alvaro
    Martin, Alejandro
    Huertas-Tato, Javier
    Camacho, David
    IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE, 2025, 9 (01): : 431 - 443
  • [9] Enhancing Model Robustness and Accuracy Against Adversarial Attacks via Adversarial Input Training
    Ingle, Ganesh
    Pawale, Sanjesh
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2024, 15 (03) : 1210 - 1228
  • [10] Evaluating Robustness Against Adversarial Attacks: A Representational Similarity Analysis Approach
    Liu, Chenyu
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,