Introducing Competition to Boost the Transferability of Targeted Adversarial Examples through Clean Feature Mixup

Cited by: 10
Authors
Byun, Junyoung [1 ]
Kwon, Myung-Joon [1 ]
Cho, Seungju [1 ]
Kim, Yoonji [1 ]
Kim, Changick [1 ]
Affiliations
[1] Korea Adv Inst Sci & Technol, Daejeon, South Korea
Source
2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2023
Keywords
DOI
10.1109/CVPR52729.2023.02361
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep neural networks are widely known to be susceptible to adversarial examples, which can cause incorrect predictions through subtle input modifications. These adversarial examples tend to be transferable between models, but targeted attacks still have lower attack success rates due to significant variations in decision boundaries. To enhance the transferability of targeted adversarial examples, we propose introducing competition into the optimization process. Our idea is to craft adversarial perturbations in the presence of two new types of competitor noises: adversarial perturbations towards different target classes and friendly perturbations towards the correct class. With these competitors, even if an adversarial example deceives a network into extracting specific features leading to the target class, this disturbance can be suppressed by the other competitors. Therefore, within this competition, adversarial examples must adopt different attack strategies, leveraging more diverse features to overwhelm the interference, which improves their transferability to different models. Considering the computational complexity, we efficiently simulate the various kinds of interference from these two types of competitors in feature space by randomly mixing up stored clean features during model inference, and we name this method Clean Feature Mixup (CFM). Our extensive experimental results on the ImageNet-Compatible and CIFAR-10 datasets show that the proposed method outperforms the existing baselines by a clear margin. Our code is available at https://github.com/dreamflake/CFM.
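As a rough illustration of the feature-mixing step the abstract describes, the following NumPy sketch convexly mixes a batch's current features with stored clean features of randomly shuffled images in the batch. This is not the authors' implementation (see the repository linked above for that); the parameter names `mix_prob` and `alpha_max` and the per-image mixing granularity are assumptions made for illustration.

```python
import numpy as np

def clean_feature_mixup(features, clean_features, mix_prob=0.1,
                        alpha_max=0.75, rng=None):
    """Sketch of the mixing operation described in the abstract.

    features:       (B, C, H, W) features of the current (attacked) batch.
    clean_features: (B, C, H, W) features stored from a clean forward pass.
    mix_prob:       per-image probability of applying the mixup (assumed name).
    alpha_max:      upper bound on the mixing ratio (assumed name).
    """
    rng = np.random.default_rng() if rng is None else rng
    b = features.shape[0]
    # Pair each image with the stored clean features of a random batch member.
    perm = rng.permutation(b)
    # Per-image mixing ratio in [0, alpha_max], broadcast over C, H, W.
    bcast = (b,) + (1,) * (features.ndim - 1)
    alpha = rng.uniform(0.0, alpha_max, size=bcast)
    # Decide per image whether to mix at all.
    apply = rng.random(bcast) < mix_prob
    mixed = (1.0 - alpha) * features + alpha * clean_features[perm]
    return np.where(apply, mixed, features)  # unselected images pass through
```

In the setting the abstract sketches, such a mixing would be attached to intermediate layers of the surrogate model (e.g. via forward hooks) so that each optimization step of the targeted attack sees randomly disturbed features; the code above shows only the tensor arithmetic of one such layer.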
Pages: 24648-24657
Page count: 10