Towards Transferable Adversarial Attacks with Centralized Perturbation

Cited by: 0
Authors
Wu, Shangbo [1 ]
Tan, Yu-an [1 ]
Wang, Yajie [1 ]
Ma, Ruinan [1 ]
Ma, Wencong [2 ]
Li, Yuanzhang [2 ]
Affiliations
[1] Beijing Inst Technol, Sch Cyberspace Sci & Technol, Beijing, Peoples R China
[2] Beijing Inst Technol, Sch Comp Sci & Technol, Beijing, Peoples R China
Source
THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 6 | 2024
Funding
National Natural Science Foundation of China;
Keywords
EXAMPLES;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Adversarial transferability enables black-box attacks on unknown victim deep neural networks (DNNs), rendering attacks viable in real-world scenarios. Current transferable attacks create adversarial perturbation over the entire image, resulting in excessive noise that overfits the source model. Concentrating perturbation on dominant, model-agnostic image regions is crucial to improving adversarial efficacy. However, limiting perturbation to local regions in the spatial domain proves inadequate for augmenting transferability. To this end, we propose a transferable adversarial attack with fine-grained perturbation optimization in the frequency domain, creating centralized perturbation. We devise a systematic pipeline that dynamically constrains perturbation optimization to dominant frequency coefficients. The constraint is optimized in parallel at each iteration, ensuring the directional alignment of perturbation optimization with model prediction. Our approach centralizes perturbation towards sample-specific important frequency features, which are shared across DNNs, effectively mitigating source model overfitting. Experiments demonstrate that by dynamically centralizing perturbation on dominant frequency coefficients, crafted adversarial examples exhibit stronger transferability and bypass various defenses.
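The core idea of constraining perturbation to dominant frequency coefficients can be illustrated with a simplified sketch. This is not the paper's full pipeline (which optimizes the constraint dynamically at each attack iteration); it is a minimal, assumed illustration that projects a perturbation onto its largest-magnitude 2D DCT coefficients and maps it back to the spatial domain. The function name `centralize_perturbation` and the `keep_ratio` parameter are hypothetical.

```python
import numpy as np


def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II transform matrix of size n x n."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    mat = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    mat[0] /= np.sqrt(2.0)  # first row scaled so the matrix is orthogonal
    return mat


def centralize_perturbation(delta: np.ndarray, keep_ratio: float = 0.1) -> np.ndarray:
    """Keep only the dominant fraction of DCT coefficients of a 2D perturbation.

    delta: 2D spatial-domain perturbation (H x W).
    keep_ratio: fraction of largest-magnitude coefficients to retain.
    """
    h, w = delta.shape
    dh, dw = dct_matrix(h), dct_matrix(w)
    coeffs = dh @ delta @ dw.T                      # forward 2D DCT
    k = max(1, int(keep_ratio * coeffs.size))
    thresh = np.sort(np.abs(coeffs).ravel())[-k]    # k-th largest magnitude
    mask = np.abs(coeffs) >= thresh                 # keep dominant coefficients
    return dh.T @ (coeffs * mask) @ dw              # inverse 2D DCT
```

In an iterative attack, such a projection would be applied to the perturbation after each gradient step, so that optimization effort concentrates on the frequency features that most influence model predictions rather than spreading noise over the whole image.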
Pages: 6109-6116
Page count: 8