Adversarial Robustness with Partial Isometry

Cited: 2
Authors
Shi-Garrier, Loic [1]
Bouaynaya, Nidhal Carla [2 ]
Delahaye, Daniel [1 ]
Affiliations
[1] Univ Toulouse, ENAC, F-31400 Toulouse, France
[2] Rowan Univ, Dept Elect & Comp Engn, Glassboro, NJ 08028 USA
Keywords
adversarial robustness; information geometry; Fisher information metric; multi-class classification
DOI
10.3390/e26020103
Chinese Library Classification
O4 [Physics]
Subject Classification Code
0702
Abstract
Despite their remarkable performance, deep learning models still lack robustness guarantees, particularly in the presence of adversarial examples. This vulnerability raises concerns about their trustworthiness and hinders their deployment in critical domains that require certified levels of robustness. In this paper, we introduce an information geometric framework to establish precise robustness criteria for l2 white-box attacks in a multi-class classification setting. We endow the output space with the Fisher information metric and derive criteria on the input-output Jacobian that ensure robustness. We show that model robustness can be achieved by constraining the model to be partially isometric around the training points. We evaluate our approach on the MNIST and CIFAR-10 datasets against adversarial attacks, showing substantial improvements over defensive distillation and Jacobian regularization for medium-sized perturbations, and superior robustness to adversarial training for large perturbations, all while maintaining the desired accuracy.
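The abstract's recipe can be sketched numerically: pull back the Fisher information metric of the categorical output (diag(1/p) in probability coordinates) through the input-output Jacobian, and penalize the deviation of the resulting metric's spectrum from that of a rank-k projection, which is what "partially isometric" means for the Jacobian. The toy linear-softmax model, the finite-difference Jacobian, and the rank choice k below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def model(x, W):
    # hypothetical stand-in for a classifier: linear map + softmax
    return softmax(W @ x)

def jacobian(x, W, eps=1e-6):
    # finite-difference input-output Jacobian dp/dx (shape: classes x inputs)
    p0 = model(x, W)
    J = np.zeros((p0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (model(x + dx, W) - p0) / eps
    return J

def pullback_metric(x, W):
    # pullback G = J^T diag(1/p) J of the Fisher information metric
    p = model(x, W)
    J = jacobian(x, W)
    return J.T @ (J / p[:, None])

def isometry_defect(x, W, k):
    # partial isometry around x: the top-k eigenvalues of G should be
    # close to 1 and the rest close to 0; return the squared deviation,
    # which could serve as a training penalty
    evals = np.linalg.eigvalsh(pullback_metric(x, W))[::-1]  # descending
    target = np.concatenate([np.ones(k), np.zeros(evals.size - k)])
    return float(np.sum((evals - target) ** 2))

rng = np.random.default_rng(0)
x = rng.normal(size=4)
W = rng.normal(size=(3, 4))
print(isometry_defect(x, W, k=2))
```

Driving this defect toward zero at training points makes the model locally distance-preserving on a k-dimensional subspace in the Fisher-Rao sense, so small input perturbations cannot produce disproportionately large moves in output space.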
Pages: 18