Crafting Targeted Universal Adversarial Perturbations: Considering Images as Noise

Cited by: 0
Authors
Wang, Huijiao [1 ]
Cai, Ding [1 ]
Wang, Li [1 ]
Xiong, Zhuo [1 ]
Affiliations
[1] Guilin Univ Elect Technol, Sch Comp Sci & Informat Secur, Guilin 541004, Peoples R China
Keywords
Perturbation methods; Optimization; Transformers; Training; Computational modeling; Task analysis; Robustness; Adversarial machine learning; Deep learning; Neural networks; Image processing; Noise measurement; Targeted universal adversarial perturbation; adversarial example; deep neural network; transformer; image as noise; proxy dataset;
DOI
10.1109/ACCESS.2023.3335094
CLC Number
TP [Automation technology, computer technology]
Subject Classification Code
0812
Abstract
The vulnerability of Deep Neural Networks (DNNs) to adversarial perturbations has been demonstrated in a large body of research. Compared to image-dependent adversarial perturbations, universal adversarial perturbations (UAPs) are more challenging because a single perturbation must indiscriminately attack all model inputs. However, there are few studies on generating data-free targeted UAPs, and the targeted attack success rate of the latest method remains unsatisfactory. Moreover, even fewer studies have evaluated such approaches on Transformers, where their efficacy remains uncertain. Therefore, this paper proposes a novel method, Denoising Targeted UAP (DT-UAP), which treats the training input as noise and incorporates the input of the last layer into the computation. Specifically, the proposed method minimizes the distance between perturbations and adversarial examples, then incorporates a targeted loss function to generate targeted universal adversarial perturbations for different DNNs and Transformers based on different proxy datasets. DT-UAP achieves an average improvement of 5% to 10% in both fooling rate and targeted fooling rate compared with the most recent method for generating targeted universal adversarial perturbations with a proxy dataset for DNNs. Additionally, DT-UAP achieves a targeted attack success rate of over 80% on Transformers such as MaxVit and SwinTransformer.
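The abstract outlines the core recipe: optimize a single perturbation over a proxy dataset with a targeted loss so that every perturbed input is pushed toward the chosen class. Below is a minimal, hedged PyTorch sketch of a generic proxy-data targeted UAP training loop consistent with that description; it is not the authors' DT-UAP implementation (for instance, it omits the distance term between perturbations and adversarial examples), and the names `model`, `proxy_loader`, `target_class`, and `eps` are assumptions.

```python
# Sketch of a generic proxy-dataset targeted UAP loop (illustrative, not the paper's code).
import torch
import torch.nn.functional as F

def train_targeted_uap(model, proxy_loader, target_class, eps=10/255,
                       lr=0.01, epochs=10, device="cuda"):
    model.eval().to(device)
    # The universal perturbation is the only trainable tensor; the classifier stays frozen.
    delta = torch.zeros(1, 3, 224, 224, device=device, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)

    for _ in range(epochs):
        for images, _ in proxy_loader:
            images = images.to(device)
            # Add the perturbation to each proxy image and clamp to the valid pixel range.
            adv = torch.clamp(images + delta, 0.0, 1.0)
            logits = model(adv)
            target = torch.full((images.size(0),), target_class,
                                dtype=torch.long, device=device)
            # Targeted loss: push all perturbed inputs toward the target class.
            loss = F.cross_entropy(logits, target)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            # Project back onto the L-infinity ball of radius eps.
            with torch.no_grad():
                delta.clamp_(-eps, eps)
    return delta.detach()
```

The resulting `delta` can then be added to arbitrary test images to measure the fooling rate and targeted fooling rate reported in the abstract.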
Pages: 131651-131660
Number of pages: 10