A Feature Map Adversarial Attack Against Vision Transformers

Times Cited: 0
Authors
Altoub, Majed [1 ]
Mehmood, Rashid [2 ]
AlQurashi, Fahad [1 ]
Alqahtany, Saad [2 ]
Alsulami, Bassma [1 ]
Affiliations
[1] King Abdulaziz Univ, Fac Comp & Informat Technol, Dept Comp Sci, Jeddah 21589, Saudi Arabia
[2] Islamic Univ Madinah, Fac Comp & Informat Syst, Dept Comp Sci, Madinah 42351, Saudi Arabia
Keywords
Vision transformers; adversarial attacks; DNNs; vulnerabilities; feature maps; perturbations; spatial domains; frequency domains
DOI
10.14569/IJACSA.2024.0151097
Chinese Library Classification: TP301 [Theory, Methods]
Discipline Code: 081202
Abstract
Image classification is a domain where Deep Neural Networks (DNNs) have demonstrated remarkable achievements. Recently, Vision Transformers (ViTs) have shown promise on large-scale image classification tasks, scaling to higher resolutions and larger input sizes more efficiently than traditional Convolutional Neural Networks (CNNs). In the context of adversarial attacks, however, ViTs remain vulnerable. Feature maps serve as the foundation for representing and extracting meaningful information from images: while CNNs excel at capturing local features and spatial relationships, ViTs are better at modeling global context and long-range dependencies. This paper proposes a feature-map-based, ViT-specific adversarial example attack called the Feature Map ViT-specific Attack (FMViTA). The objective is to generate adversarial perturbations in the spatial and frequency domains of the image representation that increase the distance between the perturbed and targeted images in feature space. The experiments use a pre-trained ViT model fine-tuned on the ImageNet dataset. The proposed attack demonstrates the vulnerability of ViTs to adversarial examples: even with a maximum perturbation magnitude of only 0.02 added to the input samples, it achieves a 100% attack success rate.
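To make the attack idea concrete, below is a minimal PyTorch sketch of a feature-map adversarial attack on a ViT. This is an illustration only, not the paper's FMViTA implementation: the timm model name, the PGD-style optimization, and the L2 feature-distance loss are assumptions, and the paper's frequency-domain component is omitted.

```python
import torch
import timm

# Pre-trained ViT (assumption: any timm ViT serves for illustration).
model = timm.create_model("vit_base_patch16_224", pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the perturbation needs gradients

def feature_map(x):
    # forward_features returns the patch-token embeddings, i.e. the
    # ViT analogue of a CNN feature map.
    return model.forward_features(x)

def feature_map_attack(x, eps=0.02, alpha=0.005, steps=40):
    """PGD-style sketch: push the ViT feature map of x + delta as far
    as possible from the clean feature map, with ||delta||_inf <= eps.
    The step size, iteration count, and L2 loss are illustrative choices."""
    with torch.no_grad():
        clean = feature_map(x)
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = torch.norm(feature_map(x + delta) - clean, p=2)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # gradient *ascent* step
            delta.clamp_(-eps, eps)             # enforce the 0.02 budget
            delta.grad.zero_()
    return (x + delta).detach()

# Usage: x is a normalized (1, 3, 224, 224) image tensor.
# x_adv = feature_map_attack(x)
```

Clamping the perturbation to an L-infinity ball of radius 0.02 mirrors the budget reported in the abstract; sign-gradient ascent on the feature distance is a standard way to push an input's internal representation away from the clean one without an explicit target class.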
Pages: 962 - 968 (7 pages)