Boosting the Transferability of Adversarial Attacks With Frequency-Aware Perturbation

Cited by: 4
Authors
Wang, Yajie [1 ]
Wu, Yi [2 ,3 ]
Wu, Shangbo [1 ]
Liu, Ximeng [4 ]
Zhou, Wanlei [5 ]
Zhu, Liehuang [1 ]
Zhang, Chuan [1 ]
Affiliations
[1] Beijing Inst Technol, Sch Cyberspace Sci & Technol, Beijing 100081, Peoples R China
[2] China Acad Informat & Commun Technol, Beijing, Peoples R China
[3] Minist Ind & Informat Technol, Key Lab Mobile Applicat Innovat & Governance Techn, Beijing 100191, Peoples R China
[4] Fuzhou Univ, Coll Comp & Data Sci, Fuzhou 350116, Peoples R China
[5] City Univ Macau, Fac Data Sci, Macau, Peoples R China
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China;
Keywords
Adversarial attack; adversarial example; transferability; deep neural networks;
DOI
10.1109/TIFS.2024.3411921
CLC number
TP301 [Theory, Methods];
Discipline code
081202;
Abstract
Deep neural networks (DNNs) are vulnerable to adversarial examples, and transfer attacks in black-box scenarios pose a severe real-world threat. Adversarial perturbations are typically global image disturbances crafted in the spatial domain, which leads to perceptible noise because the perturbations overfit the source model. Both the human visual system (HVS) and DNNs (which endeavor to mimic HVS behavior) exhibit unequal sensitivity to different frequency components of an image. In this paper, we exploit this characteristic to create frequency-aware perturbations, concentrating adversarial perturbations on the image components that contribute most to model inference in order to enhance the performance of transfer attacks. We devise a systematic approach that selects and constrains adversarial optimization within a subset of frequency components that are most critical to model prediction. Specifically, we measure the contribution of each individual frequency component and devise a scheme that concentrates adversarial optimization on the important components, thereby creating frequency-aware perturbations. Because our approach confines perturbations to model-agnostic critical frequency components, it significantly reduces overfitting to the source model, and it can be seamlessly integrated with existing state-of-the-art attacks. Experiments demonstrate that although concentrating perturbations within selected frequency components yields a smaller overall perturbation magnitude, our approach does not sacrifice adversarial effectiveness. On the contrary, our frequency-aware perturbations achieve superior performance, boosting imperceptibility, transferability, and evasion against various defenses.
Pages: 6293-6304
Number of pages: 12