Exploring misclassifications of robust neural networks to enhance adversarial attacks

Cited: 0
Authors
Leo Schwinn
René Raab
An Nguyen
Dario Zanca
Bjoern Eskofier
Affiliations
[1] Friedrich-Alexander-Universität Erlangen-Nürnberg, Department Artificial Intelligence in Biomedical Engineering
Source
Applied Intelligence | 2023, Vol. 53
Keywords
Adversarial attacks; Deep learning; Computer vision; Robustness
DOI
Not available
Abstract
Progress in making neural networks more robust against adversarial attacks has been mostly marginal, despite the great efforts of the research community. Moreover, robustness evaluations are often imprecise, making it difficult to identify promising approaches. We conduct an observational study of the classification decisions of 19 different state-of-the-art neural networks trained to be robust against adversarial attacks. This analysis provides a new indication of the limits of current models' robustness on a common benchmark. In addition, our findings suggest that current untargeted adversarial attacks induce misclassification toward only a limited number of different classes. Similarly, we find that previous attacks under-explore the perturbation space during optimization, which leads to unsuccessful attacks for samples where the initial gradient direction is a poor approximation of the final adversarial perturbation direction. Additionally, we observe that both over- and under-confidence in model predictions result in inaccurate assessments of model robustness. Based on these observations, we propose a novel loss function for adversarial attacks that consistently improves their efficiency and success rate compared to prior attacks, for all 30 analyzed models.
Pages: 19843-19859
Page count: 16