You Only Attack Once: Single-Step DeepFool Algorithm

Cited by: 0
Authors
Li, Jun [1]
Xu, Yanwei [1]
Hu, Yaocun [1]
Ma, Yongyong [1]
Yin, Xin [1]
Affiliations
[1] Jilin Univ Finance & Econ, Sch Management Sci & Informat Engn, Changchun 130117, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2025, Vol. 15, Issue 1
Keywords
adversarial examples; adversarial attacks; deep learning; computer vision;
DOI
10.3390/app15010302
Chinese Library Classification
O6 [Chemistry];
Discipline Code
0703;
Abstract
Adversarial attacks expose latent vulnerabilities in artificial intelligence systems, necessitating a reassessment and enhancement of model robustness to keep deep learning models reliable and secure against malicious attacks. We propose a fast method for efficiently finding sample points close to the decision boundary. By computing the gradient of each class's output with respect to the input and comparing these gradients with that of the true class, we identify the target class whose decision boundary is most sensitive, and generate an adversarial example accordingly. We call this technique the "You Only Attack Once" (YOAO) algorithm. Whereas the DeepFool algorithm iterates repeatedly, this method requires only a single iteration to mount an effective attack. The experimental results demonstrate that the proposed algorithm outperforms the original approach in various scenarios, especially in resource-constrained environments: restricted to a single iteration, it achieves an attack success rate 70.6% higher than that of the DeepFool algorithm. The method shows promise for broad application in both offensive and defensive strategies across diverse deep learning models. We investigated the relationship between classifier accuracy and adversarial attack success rate and compared the algorithm with others; the experiments confirmed that it attains higher attack success rates and efficiency. Furthermore, data visualization on the ImageNet dataset shows that the proposed algorithm concentrates its perturbations on important features. Finally, we discuss the algorithm's remaining limitations and outline future research directions. Our code will be made public upon acceptance of the paper.
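The mechanism the abstract describes (per-class gradients compared against the true class to pick the class with the nearest linearized decision boundary, then crossed in one step) follows the linearization underlying DeepFool. Below is a minimal PyTorch sketch of that single-step idea, written from the abstract alone; it is an illustration under stated assumptions, not the authors' released implementation, and the names yoao_single_step and overshoot are invented for this example.

```python
import torch

def yoao_single_step(model, x, label, overshoot=0.02):
    """Single-step, DeepFool-style attack (sketch of the YOAO idea).

    model: classifier returning logits; x: input of shape (1, C, H, W);
    label: true class index. Loops over all classes for clarity, which
    is slow for large label spaces such as ImageNet.
    """
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)[0]                         # shape: (num_classes,)
    k0 = int(label)

    # Gradient of the true-class logit w.r.t. the input.
    grad_true = torch.autograd.grad(logits[k0], x, retain_graph=True)[0]

    best_ratio, best_w, best_df = None, None, None
    for k in range(logits.shape[0]):
        if k == k0:
            continue
        grad_k = torch.autograd.grad(logits[k], x, retain_graph=True)[0]
        w = grad_k - grad_true                   # gradient difference
        df = (logits[k] - logits[k0]).detach()   # logit difference
        # Distance to the linearized boundary of class k.
        ratio = df.abs() / (w.flatten().norm() + 1e-8)
        if best_ratio is None or ratio < best_ratio:
            best_ratio, best_w, best_df = ratio, w, df

    # Minimal perturbation crossing the nearest boundary, applied once.
    r = (best_df.abs() / (best_w.flatten().norm() ** 2 + 1e-8)) * best_w
    return (x + (1 + overshoot) * r).detach()
```

A full evaluation would additionally clip the perturbed input back to the valid pixel range and verify that the predicted class actually changed; the paper's reported 70.6% gain under a single iteration refers to its own implementation, not this sketch.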
Pages: 18