Adversarial Robustness with Partial Isometry

Cited by: 2
Authors
Shi-Garrier, Loic [1 ]
Bouaynaya, Nidhal Carla [2 ]
Delahaye, Daniel [1 ]
Affiliations
[1] Univ Toulouse, ENAC, F-31400 Toulouse, France
[2] Rowan Univ, Dept Elect & Comp Engn, Glassboro, NJ 08028 USA
Keywords
adversarial robustness; information geometry; Fisher information metric; multi-class classification
DOI
10.3390/e26020103
Chinese Library Classification
O4 [Physics]
Discipline Code
0702
Abstract
Despite their remarkable performance, deep learning models still lack robustness guarantees, particularly in the presence of adversarial examples. This significant vulnerability raises concerns about their trustworthiness and hinders their deployment in critical domains that require certified levels of robustness. In this paper, we introduce an information geometric framework to establish precise robustness criteria for ℓ2 white-box attacks in a multi-class classification setting. We endow the output space with the Fisher information metric and derive criteria on the input-output Jacobian to ensure robustness. We show that model robustness can be achieved by constraining the model to be partially isometric around the training points. We evaluate our approach on the MNIST and CIFAR-10 datasets against adversarial attacks, revealing substantial improvements over defensive distillation and Jacobian regularization for medium-sized perturbations, and superior robustness relative to adversarial training for large perturbations, all while maintaining the desired accuracy.
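The abstract's core idea, constraining the input-output Jacobian to be a partial isometry around the training points, can be illustrated with a minimal sketch. The sketch below is an assumed surrogate, not the paper's actual loss: for a toy linear map f(x) = Wx, whose Jacobian is simply W, it penalizes the Frobenius deviation of J Jᵀ from a scaled identity on the output space, so no output direction amplifies input perturbations.

```python
import numpy as np

# Toy linear classifier f(x) = W x: its input-output Jacobian is W itself.
# Hypothetical surrogate for the partial-isometry criterion: drive
# J J^T toward alpha * I on the output space.

rng = np.random.default_rng(0)
num_classes, dim = 3, 8
W = rng.normal(size=(num_classes, dim))

def partial_isometry_penalty(J, alpha=1.0):
    """Frobenius-norm deviation of J J^T from alpha * I (assumed surrogate)."""
    k = J.shape[0]
    return float(np.sum((J @ J.T - alpha * np.eye(k)) ** 2))

# Plain gradient descent on the penalty alone (illustration only).
alpha, lr = 1.0, 0.01
for _ in range(500):
    grad = 4 * (W @ W.T - alpha * np.eye(num_classes)) @ W  # d(penalty)/dW
    W -= lr * grad

print(partial_isometry_penalty(W))  # near zero: W is now (row-)isometric
```

In practice the Jacobian of a deep network is not a fixed matrix, so such a penalty would be evaluated per training point via automatic differentiation and added to the classification loss; the sketch only shows the geometric target the constraint expresses.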
Pages: 18
Related Papers (50 total)
  • [21] Evaluating Adversarial Robustness with Expected Viable Performance
    McCoppin, Ryan
    Dawson, Colin
    Kennedy, Sean
    Blaha, Leslie M.
    22ND IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS, ICMLA 2023, 2023, : 76 - 82
  • [22] Adversarial Robustness of Sparse Local Lipschitz Predictors
    Muthukumar, Ramchandran
    Sulam, Jeremias
    SIAM JOURNAL ON MATHEMATICS OF DATA SCIENCE, 2023, 5 (04): : 920 - 948
  • [23] On the Importance of Backbone to the Adversarial Robustness of Object Detectors
    Li, Xiao
    Chen, Hang
    Hu, Xiaolin
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2025, 20 : 2387 - 2398
  • [24] Adversarial Robustness Certification for Bayesian Neural Networks
    Wicker, Matthew
    Platzer, Andre
    Laurenti, Luca
    Kwiatkowska, Marta
    FORMAL METHODS, PT I, FM 2024, 2025, 14933 : 3 - 28
  • [25] Advancing Deep Metric Learning With Adversarial Robustness
    Singh, Inderjeet
    Kakizaki, Kazuya
    Araki, Toshinori
    ASIAN CONFERENCE ON MACHINE LEARNING, VOL 222, 2023, 222
  • [26] On the Relationship between Generalization and Robustness to Adversarial Examples
    Pedraza, Anibal
    Deniz, Oscar
    Bueno, Gloria
    SYMMETRY-BASEL, 2021, 13 (05):
  • [27] Toward Adversarial Robustness in Unlabeled Target Domains
    Zhang, Jiajin
    Chao, Hanqing
    Yan, Pingkun
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2023, 32 : 1272 - 1284
  • [28] Scaleable input gradient regularization for adversarial robustness
    Finlay, Chris
    Oberman, Adam M.
    MACHINE LEARNING WITH APPLICATIONS, 2021, 3
  • [29] On the Adversarial Robustness of Decision Trees and a Symmetry Defense
    Lindqvist, Blerta
    IEEE ACCESS, 2025, 13 : 16120 - 16132
  • [30] Improving adversarial robustness by learning shared information
    Yu, Xi
    Smedemark-Margulies, Niklas
    Aeron, Shuchin
    Koike-Akino, Toshiaki
    Moulin, Pierre
    Brand, Matthew
    Parsons, Kieran
    Wang, Ye
    PATTERN RECOGNITION, 2023, 134