Spatial-frequency gradient fusion based model augmentation for high transferability adversarial attack

Cited by: 3
Authors
Pang, Jingfa [1 ,2 ]
Yuan, Chengsheng [1 ,2 ]
Xia, Zhihua [3 ]
Li, Xinting [1 ,2 ,4 ]
Fu, Zhangjie
Affiliations
[1] Nanjing Univ Informat Sci & Technol, Sch Comp Sci, Nanjing 210044, Peoples R China
[2] Nanjing Univ Informat Sci & Technol, Engn Res Ctr Digital Forens, Minist Educ, Nanjing 210044, Peoples R China
[3] Jinan Univ, Coll Cyber Secur, Guangzhou 510632, Peoples R China
[4] Natl Univ Def Technol, Sch Foreign Languages, Nanjing 210039, Peoples R China
Keywords
Deep neural network; Adversarial example; Black-box attack; Model augmentation; Spatial-frequency gradient fusion;
DOI
10.1016/j.knosys.2024.112241
CLC number
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In recent years, deep learning has gained widespread application across diverse fields, including image classification and machine translation. Nevertheless, the emergence of adversarial examples has revealed the vulnerability of deep learning techniques to potential attacks. Although diverse adversarial attack methods have been introduced, they remain constrained by certain limitations. Specifically, global perturbations are easily discernible by humans, resulting in poor imperceptibility. Additionally, current methods suffer from limited transferability due to their reliance on attacking specific models. To address these challenges, this paper proposes a spatial-frequency gradient fusion based model augmentation for adversarial attack. First, we utilize a Gaussian convolution kernel to pinpoint regions in images that exhibit significant pixel variation, aiming to generate locally imperceptible perturbations. These areas, which we regard as complex texture regions, are ideal for adding perturbations. Then, we design a perceptual similarity constraint to suppress the generation of perturbations in smooth texture regions. Subsequently, to further enhance the transferability of our method, we propose a spatial-frequency gradient fusion based model augmentation, applying random spectral transformations in the frequency domain to narrow the differences between models. Additionally, we design complex-region scaling transformations in the spatial domain, aimed at capturing common features shared across models. Finally, we fuse the gradients from the spatial and frequency domains, leveraging the strengths of both to enable attack models to effectively simulate the target model.
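The texture-region localization step described in the abstract (a Gaussian convolution kernel pinpointing regions of significant pixel variation) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the sigma, kernel radius, and quantile threshold are all assumptions, and the "complex texture" mask is taken to be the pixels that deviate most from their Gaussian-blurred version.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    # Normalized 1-D Gaussian kernel.
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def texture_mask(img, sigma=1.5, quantile=0.7):
    """Boolean mask of 'complex texture' pixels: where the image
    deviates most from its Gaussian-blurred version, i.e. regions
    of significant local pixel variation."""
    k = gaussian_kernel1d(sigma, int(3 * sigma))
    # Separable Gaussian blur via two 1-D convolutions.
    blurred = np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 0, img)
    blurred = np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 1, blurred)
    energy = np.abs(img - blurred)          # high-frequency energy
    return energy >= np.quantile(energy, quantile)

rng = np.random.default_rng(0)
img = rng.random((32, 32))                  # stand-in grayscale image
mask = texture_mask(img)                    # True where perturbations would be placed
```

Perturbations would then be restricted to the masked pixels, which is what keeps them locally imperceptible.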
Extensive experiments conducted on ImageNet and CIFAR-10 datasets have shown that our method attains a remarkable black-box attack success rate of up to 93.1%, with a perceptual loss reduction of approximately 8.39%, while also exhibiting stronger robustness.
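The spatial-frequency gradient fusion itself can be illustrated with a minimal numerical sketch. This is not the authors' implementation: the toy quadratic loss stands in for a surrogate model's loss, central differencing stands in for backpropagation, and the spectral-noise strength, number of spectral views, scaling factor, and equal fusion weights are all assumptions.

```python
import numpy as np

def spectral_transform(x, rng, strength=0.5):
    """Random spectral transformation: randomly rescale the image's
    Fourier coefficients, simulating a frequency-domain view."""
    X = np.fft.fft2(x)
    noise = 1.0 + strength * (rng.random(X.shape) - 0.5)
    return np.real(np.fft.ifft2(X * noise))

def numerical_grad(loss, x, eps=1e-4):
    # Central-difference gradient (stand-in for backpropagation).
    g = np.zeros_like(x)
    for i in np.ndindex(x.shape):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (loss(x + d) - loss(x - d)) / (2 * eps)
    return g

def fused_gradient(loss, x, rng, n_spec=3, scale=0.9):
    # Frequency branch: average gradients over random spectral views.
    g_freq = np.mean([numerical_grad(loss, spectral_transform(x, rng))
                      for _ in range(n_spec)], axis=0)
    # Spatial branch: gradient on a scaled copy (a crude stand-in for
    # the paper's complex-region scaling transformation).
    g_spat = numerical_grad(loss, scale * x)
    return 0.5 * (g_freq + g_spat)          # equal-weight fusion (assumption)

rng = np.random.default_rng(42)
x = np.ones((4, 4))                          # toy input
loss = lambda z: float(np.sum(z ** 2))       # toy surrogate loss
g = fused_gradient(loss, x, rng)
```

The fused gradient would then drive an iterative attack update (e.g. a signed gradient step), with the two branches jointly approximating gradients shared across architectures.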
Pages: 10
Related papers
47 references in total
[1]   On the Robustness of Semantic Segmentation Models to Adversarial Attacks [J].
Arnab, Anurag ;
Miksik, Ondrej ;
Torr, Philip H. S. .
2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, :888-897
[2]   Practical Black-Box Attacks on Deep Neural Networks Using Efficient Query Mechanisms [J].
Bhagoji, Arjun Nitin ;
He, Warren ;
Li, Bo ;
Song, Dawn .
COMPUTER VISION - ECCV 2018, PT XII, 2018, 11216 :158-174
[3]   Towards Evaluating the Robustness of Neural Networks [J].
Carlini, Nicholas ;
Wagner, David .
2017 IEEE SYMPOSIUM ON SECURITY AND PRIVACY (SP), 2017, :39-57
[4]   DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs [J].
Chen, Liang-Chieh ;
Papandreou, George ;
Kokkinos, Iasonas ;
Murphy, Kevin ;
Yuille, Alan L. .
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2018, 40 (04) :834-848
[5]   Visformer: The Vision-friendly Transformer [J].
Chen, Zhengsu ;
Xie, Lingxi ;
Niu, Jianwei ;
Liu, Xuefeng ;
Wei, Longhui ;
Tian, Qi .
2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, :569-578
[6]   Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks [J].
Dong, Yinpeng ;
Pang, Tianyu ;
Su, Hang ;
Zhu, Jun .
2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, :4307-4316
[7]   Boosting Adversarial Attacks with Momentum [J].
Dong, Yinpeng ;
Liao, Fangzhou ;
Pang, Tianyu ;
Su, Hang ;
Zhu, Jun ;
Hu, Xiaolin ;
Li, Jianguo .
2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, :9185-9193
[8]  
Dosovitskiy A, 2021, arXiv, DOI arXiv:2010.11929
[9]   Patch-Wise Attack for Fooling Deep Neural Network [J].
Gao, Lianli ;
Zhang, Qilong ;
Song, Jingkuan ;
Liu, Xianglong ;
Shen, Heng Tao .
COMPUTER VISION - ECCV 2020, PT XXVIII, 2020, 12373 :307-322
[10]  
Guo C., 2017, arXiv