On the Relationship between Generalization and Robustness to Adversarial Examples

Cited by: 8
Authors
Pedraza, Anibal [1 ]
Deniz, Oscar [1 ]
Bueno, Gloria [1 ]
Affiliations
[1] Univ Castilla La Mancha, VISILAB, ETSII, Ciudad Real 13071, Spain
Source
SYMMETRY-BASEL | 2021, Vol. 13, No. 05
Keywords
machine learning; computer vision; deep learning; adversarial examples; adversarial robustness; overfitting;
DOI
10.3390/sym13050817
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject classification codes
07 ; 0710 ; 09 ;
Abstract
One of the most intriguing phenomena related to deep learning is the so-called adversarial example. These samples are visually equivalent to normal inputs, with differences undetectable by humans, yet they cause the networks to output wrong results. The phenomenon can be framed as a symmetry/asymmetry problem: inputs to a neural network with a similar/symmetric appearance to regular images produce an opposite/asymmetric output. Some researchers focus on developing methods for generating adversarial examples, while others propose defense methods. In parallel, there is growing interest in characterizing the phenomenon, which is also the focus of this paper. On well-known datasets of common images, such as CIFAR-10 and STL-10, a neural network architecture is first trained in a normal regime, where training and validation performance increase until generalization is reached. The same architectures and datasets are then trained in an overfitting regime, where there is a growing disparity between training and validation performance. The behaviour of these two regimes against adversarial examples is then compared. The results show greater robustness to adversarial examples in the overfitting regime. We explain this simultaneous loss of generalization and gain in robustness to adversarial examples as another manifestation of the well-known fitting-generalization trade-off.
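The evaluation protocol sketched in the abstract (train a model, craft adversarial perturbations for it, then compare clean and adversarial accuracy) can be illustrated in miniature. The sketch below is an assumption-laden stand-in, not the paper's actual setup: it replaces CIFAR-10/STL-10 and a deep network with synthetic linearly separable data and a logistic classifier, and uses the FGSM attack as a representative adversarial-example generator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable data standing in for an image dataset.
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = (X @ w_true > 0).astype(int)

# Train a logistic-regression "network" by plain gradient descent.
w = np.zeros(10)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))          # predicted probabilities
    w -= 0.1 * X.T @ (p - y) / len(y)       # logistic-loss gradient step

def accuracy(Xe, ye, w):
    """Fraction of examples whose predicted label matches ye."""
    return float(np.mean(((Xe @ w) > 0).astype(int) == ye))

# FGSM: move each input by eps in the sign of the loss gradient w.r.t. x.
# For logistic loss, grad_x = (p - y) * w.
eps = 0.5
p = 1 / (1 + np.exp(-(X @ w)))
X_adv = X + eps * np.sign((p - y)[:, None] * w[None, :])

clean_acc = accuracy(X, y, w)      # accuracy on unperturbed inputs
adv_acc = accuracy(X_adv, y, w)    # accuracy on adversarial inputs
```

In the paper's framing, this clean-vs-adversarial accuracy gap would be measured twice, once for the normally trained model and once for the overfitted one, and the two gaps compared.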
Pages: 13
Related papers
50 records total
  • [41] Really natural adversarial examples
    Pedraza, Anibal
    Deniz, Oscar
    Bueno, Gloria
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2022, 13 (04) : 1065 - 1077
  • [42] Adversarial robustness via noise injection in smoothed models
    Nemcovsky, Yaniv
    Zheltonozhskii, Evgenii
    Baskin, Chaim
    Chmiel, Brian
    Bronstein, Alex M.
    Mendelson, Avi
    APPLIED INTELLIGENCE, 2023, 53 (08) : 9483 - 9498
  • [44] Adversarial Robustness on Image Classification With k-Means
    Omari, Rollin
    Kim, Junae
    Montague, Paul
    IEEE ACCESS, 2024, 12 : 28853 - 28859
  • [45] Restoration of Adversarial Examples Using Image Arithmetic Operations
    Ali, Kazim
    Quershi, Adnan N.
    INTELLIGENT AUTOMATION AND SOFT COMPUTING, 2022, 32 (01) : 271 - 284
  • [46] Pareto adversarial robustness: balancing spatial robustness and sensitivity-based robustness
    Ke Sun
    Mingjie Li
    Zhouchen Lin
    Science China Information Sciences, 2025, 68 (6)
  • [47] Improving Adversarial Robustness via Attention and Adversarial Logit Pairing
    Li, Xingjian
    Goodman, Dou
    Liu, Ji
    Wei, Tao
    Dou, Dejing
    FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2022, 4
  • [48] ADVERSARIAL EXAMPLES FOR GOOD: ADVERSARIAL EXAMPLES GUIDED IMBALANCED LEARNING
    Zhang, Jie
    Zhang, Lei
    Li, Gang
    Wu, Chao
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 136 - 140
  • [49] Global Wasserstein Margin maximization for boosting generalization in adversarial training
    Yu, Tingyue
    Wang, Shen
    Yu, Xiangzhan
    APPLIED INTELLIGENCE, 2023, 53 (10) : 11490 - 11504
  • [50] It Is All about Data: A Survey on the Effects of Data on Adversarial Robustness
    Xiong, Peiyu
    Tegegn, Michael
    Sarin, Jaskeerat Singh
    Pal, Shubhraneel
    Rubin, Julia
    ACM COMPUTING SURVEYS, 2024, 56 (07)