Is Robustness the Cost of Accuracy? - A Comprehensive Study on the Robustness of 18 Deep Image Classification Models

Cited by: 183
|
Authors
Su, Dong [1 ]
Zhang, Huan [2 ]
Chen, Hongge [3 ]
Yi, Jinfeng [4 ]
Chen, Pin-Yu [1 ]
Gao, Yupeng [1 ]
Affiliations
[1] IBM Res, New York, NY 10598 USA
[2] Univ Calif Davis, Davis, CA 95616 USA
[3] MIT, Cambridge, MA 02139 USA
[4] JD AI Res, Beijing, Peoples R China
Source
COMPUTER VISION - ECCV 2018, PT XII | 2018, Vol. 11216
Keywords
Deep neural networks; Adversarial attacks; Robustness
DOI
10.1007/978-3-030-01258-8_39
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Prediction accuracy has long been the sole standard for comparing the performance of different image classification models, including in the ImageNet competition. However, recent studies have highlighted the lack of robustness of well-trained deep neural networks to adversarial examples: visually imperceptible perturbations to natural images can easily be crafted to mislead image classifiers into misclassification. To demystify the trade-offs between robustness and accuracy, in this paper we thoroughly benchmark 18 ImageNet models using multiple robustness metrics, including the distortion, success rate, and transferability of adversarial examples between 306 pairs of models. Our extensive experimental results reveal several new insights: (1) linear scaling law - the empirical ℓ2 and ℓ∞ distortion metrics scale linearly with the logarithm of classification error; (2) model architecture is a more critical factor for robustness than model size, and the disclosed accuracy-robustness Pareto frontier can be used as an evaluation criterion by ImageNet model designers; (3) for a similar network architecture, increasing network depth slightly improves robustness in ℓ∞ distortion; (4) there exist models (in the VGG family) that exhibit high adversarial transferability, while most adversarial examples crafted from one model can only be transferred within the same family. Experiment code is publicly available at https://github.com/huanzhang12/Adversarial_Survey.
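The abstract's first finding, the linear scaling law, states that a model's mean adversarial distortion scales linearly with the logarithm of its classification error. A minimal sketch of fitting such a law with an ordinary least-squares regression, using hypothetical (error, distortion) pairs rather than the paper's actual measurements:

```python
import math

# Hypothetical (top-1 error, mean l2 distortion) pairs for four imaginary
# models; the paper reports real measurements for 18 ImageNet models.
models = [
    ("model-a", 0.40, 2.00),
    ("model-b", 0.30, 1.60),
    ("model-c", 0.25, 1.35),
    ("model-d", 0.20, 1.05),
]

# Ordinary least-squares fit of: distortion = a * log(error) + b
xs = [math.log(err) for _, err, _ in models]
ys = [dist for _, _, dist in models]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx

# A positive slope means higher-error (less accurate) models tolerate larger
# distortions, i.e. they are empirically more robust - accuracy costs robustness.
print(f"distortion ~= {a:.2f} * log(error) + {b:.2f}")
```

The positive slope on data shaped like this is the "cost of accuracy" in the title: as error falls (accuracy rises), the distortion needed to fool the model shrinks.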
Pages: 644-661 (18 pages)
Related Papers (showing 10 of 50)
  • [1] Impact of Attention on Adversarial Robustness of Image Classification Models
    Agrawal, Prachi
    Punn, Narinder Singh
    Sonbhadra, Sanjay Kumar
    Agarwal, Sonali
    2021 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2021, : 3013 - 3019
  • [2] Robustness of models addressing Information Disorder: A comprehensive review and benchmarking study
    Fenza, Giuseppe
    Loia, Vincenzo
    Stanzione, Claudio
    Di Gisi, Maria
    NEUROCOMPUTING, 2024, 596
  • [3] DeepAdversaries: examining the robustness of deep learning models for galaxy morphology classification
    Ciprijanovic, Aleksandra
    Kafkes, Diana
    Snyder, Gregory
    Sanchez, F. Javier
    Perdue, Gabriel Nathan
    Pedro, Kevin
    Nord, Brian
    Madireddy, Sandeep
    Wild, Stefan M.
    MACHINE LEARNING: SCIENCE AND TECHNOLOGY, 2022, 3 (03)
  • [4] Robustness Analysis for Deep Learning-Based Image Reconstruction Models
    Ayna, Cemre Omer
    Gurbuz, Ali Cafer
    2022 56TH ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS, AND COMPUTERS, 2022, : 1428 - 1432
  • [5] A survey on robustness attacks for deep code models
    Qu, Yubin
    Huang, Song
    Yao, Yongming
    AUTOMATED SOFTWARE ENGINEERING, 2024, 31 (02)
  • [6] Adversarial robustness and attacks for multi-view deep models
    Sun, Xuli
    Sun, Shiliang
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2021, 97 (97)
  • [7] Toward a Better Tradeoff Between Accuracy and Robustness for Image Classification via Adversarial Feature Diversity
    Xue, Wei
    Wang, Yonghao
    Wang, Yuchi
    Wang, Yue
    Du, Mingyang
    Zheng, Xiao
    IEEE JOURNAL ON MINIATURIZATION FOR AIR AND SPACE SYSTEMS, 2024, 5 (04): 254-264
  • [8] Robustness and Transferability of Adversarial Attacks on Different Image Classification Neural Networks
    Smagulova, Kamilya
    Bacha, Lina
    Fouda, Mohammed E.
    Kanj, Rouwaida
    Eltawil, Ahmed
    ELECTRONICS, 2024, 13 (03)
  • [9] Dealing with Robustness of Convolutional Neural Networks for Image Classification
    Arcaini, Paolo
    Bombarda, Andrea
    Bonfanti, Silvia
    Gargantini, Angelo
    2020 IEEE INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE TESTING (AITEST), 2020, : 7 - 14
  • [10] Adversarial Robustness on Image Classification With k-Means
    Omari, Rollin
    Kim, Junae
    Montague, Paul
    IEEE ACCESS, 2024, 12 : 28853 - 28859