Enhancing the transferability of adversarial examples on vision transformers

Times Cited: 1
Authors
Guan, Yujiao [1 ]
Yang, Haoyu [1 ]
Qu, Xiaotong [1 ]
Wang, Xiaodong [1 ]
Affiliations
[1] Ocean Univ China, Coll Comp Sci & Technol, Qingdao, Peoples R China
Keywords
vision transformer; adversarial examples; transferability; image classification; computer vision;
DOI
10.1117/1.JEI.33.2.023039
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Subject Classification Codes
0808; 0809
Abstract
The advancement of adversarial attack techniques, particularly against neural network architectures, is a crucial area of research in machine learning. Notably, the emergence of vision transformers (ViTs) as a dominant force in computer vision tasks has opened avenues for exploring their vulnerabilities. In this context, we introduce dual gradient optimization for adversarial transferability (DGO-AT), a comprehensive strategy designed to enhance the transferability of adversarial examples on ViTs. DGO-AT incorporates two innovative components: attention gradient smoothing (AGS) and multi-layer perceptron gradient random dropout (GRD-MLP). AGS targets the attention layers of ViTs to smooth gradients and reduce noise, focusing on global features for improved transferability. GRD-MLP, in turn, introduces stochasticity into MLP gradient updates, broadening the applicability of the resulting adversarial examples. Together, these strategies address the distinctive structural properties of ViTs, yielding more effective and transferable adversarial attacks. Evaluations across a variety of ViT and CNN models on the ImageNet dataset demonstrate that DGO-AT significantly enhances the effectiveness and transferability of attacks, contributing to the ongoing discourse on the adversarial robustness of advanced neural network models. (c) 2024 SPIE and IS&T
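The abstract describes AGS as smoothing the gradients that flow through the attention layers and GRD-MLP as randomly dropping components of the MLP gradients while the adversarial example is being crafted. The sketch below shows one way such gradient manipulation could be wired into an iterative attack on a ViT surrogate; the module names ("attn", "mlp"), the average-pooling smoother, the drop probability, and the I-FGSM loop are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal PyTorch sketch of AGS + GRD-MLP-style gradient manipulation,
# as read from the abstract; details are assumptions for illustration.
import torch
import torch.nn.functional as F
import timm  # assumed dependency; any ViT whose blocks expose .attn / .mlp works


def attach_dgo_hooks(model, smooth_kernel=3, drop_prob=0.1):
    """Smooth gradients leaving attention blocks (AGS-like) and randomly
    zero gradients leaving MLP blocks (GRD-MLP-like) during backprop."""

    def smooth(grad):                                   # grad: (B, tokens, dim)
        g = grad.transpose(1, 2)                        # (B, dim, tokens)
        g = F.avg_pool1d(g, smooth_kernel, stride=1, padding=smooth_kernel // 2)
        return g.transpose(1, 2)                        # assumed smoothing operator

    def random_drop(grad):
        mask = (torch.rand_like(grad) > drop_prob).float()
        return grad * mask                              # assumed dropout scheme

    def hook_output(grad_fn):
        def fwd_hook(module, inputs, output):
            if output.requires_grad:
                output.register_hook(grad_fn)           # rewrite grad in backward pass
        return fwd_hook

    for name, module in model.named_modules():
        if name.endswith(".attn"):
            module.register_forward_hook(hook_output(smooth))
        elif name.endswith(".mlp"):
            module.register_forward_hook(hook_output(random_drop))


def dgo_at_attack(model, images, labels, eps=8 / 255, steps=10):
    """Plain iterative FGSM loop; the hooks above reshape the surrogate
    gradients before they reach the input image."""
    alpha = eps / steps
    adv = images.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()
        adv = images + (adv - images).clamp(-eps, eps)  # project to L-inf ball
        adv = adv.clamp(0, 1)
    return adv


if __name__ == "__main__":
    model = timm.create_model("vit_base_patch16_224", pretrained=False).eval()
    attach_dgo_hooks(model)
    x = torch.rand(2, 3, 224, 224)
    y = torch.randint(0, 1000, (2,))
    x_adv = dgo_at_attack(model, x, y)
```

The adversarial examples produced on the hooked surrogate would then be evaluated on unmodified ViT and CNN target models to measure transferability, which is the setting the abstract describes.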
Pages: 16