Robustness Against Adversarial Attacks in Neural Networks Using Incremental Dissipativity

Cited by: 3
Authors
Aquino, Bernardo [1 ]
Rahnama, Arash [2 ]
Seiler, Peter [3 ]
Lin, Lizhen [4 ]
Gupta, Vijay [1 ]
Affiliations
[1] Univ Notre Dame, Dept Elect Engn, Notre Dame, IN 46556 USA
[2] Amazon Inc, New York, NY 10001 USA
[3] Univ Michigan, Dept Elect Engn & Comp Sci, Ann Arbor, MI 48109 USA
[4] Univ Notre Dame, Dept Appl Computat Math & Stat, Notre Dame, IN 46556 USA
Source
IEEE CONTROL SYSTEMS LETTERS
Keywords
Biological neural networks; Robustness; Training; Perturbation methods; Standards; Neurons; Optimization; Adversarial Attacks; Deep Neural Networks; Robust Design; Passivity Theory; Spectral Regularization;
DOI
10.1109/LCSYS.2022.3150719
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Adversarial examples can easily degrade the classification performance of neural networks. Empirical methods for promoting robustness against such examples have been proposed, but they often lack analytical insights and formal guarantees. Recently, robustness certificates based on system-theoretic notions have appeared in the literature. This letter proposes an incremental dissipativity-based robustness certificate for neural networks, expressed as a linear matrix inequality for each layer. We also propose a sufficient spectral norm bound for this certificate that is scalable to neural networks with multiple layers. We demonstrate improved performance against adversarial attacks on a feed-forward neural network trained on MNIST and an AlexNet trained on CIFAR-10.
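The record gives only the high-level idea. As a rough, hypothetical illustration of the layer-wise spectral-norm condition the abstract mentions (and the "Spectral Regularization" keyword above), the PyTorch sketch below penalizes any layer whose weight matrix's largest singular value exceeds an assumed bound GAMMA during training. The network architecture, the bound GAMMA, and the penalty weight LAM are illustrative assumptions, not the certificate or training procedure from the letter itself.

# Hypothetical sketch (not the authors' method): train a small feed-forward
# classifier while penalizing layers whose spectral norm exceeds an assumed
# target bound GAMMA. SmallNet, GAMMA, and LAM are illustrative choices.
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    def __init__(self, in_dim=784, hidden=256, classes=10):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, classes),
        )

    def forward(self, x):
        return self.layers(x)

GAMMA = 1.0   # assumed per-layer spectral-norm target
LAM = 1e-2    # assumed regularization weight

def spectral_penalty(model):
    # Sum over layers of how far the largest singular value of the
    # weight matrix exceeds the target bound GAMMA.
    penalty = 0.0
    for m in model.modules():
        if isinstance(m, nn.Linear):
            sigma = torch.linalg.matrix_norm(m.weight, ord=2)
            penalty = penalty + torch.relu(sigma - GAMMA) ** 2
    return penalty

model = SmallNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on random data standing in for MNIST.
x, y = torch.randn(64, 784), torch.randint(0, 10, (64,))
opt.zero_grad()
loss = loss_fn(model(x), y) + LAM * spectral_penalty(model)
loss.backward()
opt.step()

Keeping every layer's spectral norm below such a bound limits how much a small input perturbation can be amplified as it propagates through the network, which is the intuition behind using a per-layer spectral norm condition as a scalable surrogate for a layer-wise LMI certificate.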
Pages: 2341-2346
Number of pages: 6
Related Papers
50 in total
  • [41] Robust convolutional neural networks against adversarial attacks on medical images
    Shi, Xiaoshuang
    Peng, Yifan
    Chen, Qingyu
    Keenan, Tiarnan
    Thavikulwat, Alisa T.
    Lee, Sungwon
    Tang, Yuxing
    Chew, Emily Y.
    Summers, Ronald M.
    Lu, Zhiyong
    PATTERN RECOGNITION, 2022, 132
  • [42] Towards Imperceptible and Robust Adversarial Example Attacks against Neural Networks
    Luo, Bo
    Liu, Yannan
    Wei, Lingxiao
    Xu, Qiang
    THIRTY-SECOND AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTIETH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / EIGHTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2018, : 1652 - 1659
  • [43] On the Intrinsic Robustness of NVM Crossbars Against Adversarial Attacks
    Roy, Deboleena
    Chakraborty, Indranil
    Ibrayev, Timur
    Roy, Kaushik
    2021 58TH ACM/IEEE DESIGN AUTOMATION CONFERENCE (DAC), 2021, : 565 - 570
  • [44] Reinforced Adversarial Attacks on Deep Neural Networks Using ADMM
    Zhao, Pu
    Xu, Kaidi
    Zhang, Tianyun
    Fardad, Makan
    Wang, Yanzhi
    Lin, Xue
    2018 IEEE GLOBAL CONFERENCE ON SIGNAL AND INFORMATION PROCESSING (GLOBALSIP 2018), 2018, : 1169 - 1173
  • [45] Enhancing EEG Signal Classifier Robustness Against Adversarial Attacks Using a Generative Adversarial Network Approach
    Aissa N.E.H.S.B.
    Kerrache C.A.
    Korichi A.
    Lakas A.
    Belkacem A.N.
IEEE Internet of Things Magazine, 2024, 7 (03): 44 - 49
  • [46] Enhancing Model Robustness Against Adversarial Attacks with an Anti-adversarial Module
    Qin, Zhiquan
    Liu, Guoxing
    Lin, Xianming
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT IX, 2024, 14433 : 66 - 78
  • [47] Not So Robust after All: Evaluating the Robustness of Deep Neural Networks to Unseen Adversarial Attacks
    Garaev, Roman
    Rasheed, Bader
    Khan, Adil Mehmood
    ALGORITHMS, 2024, 17 (04)
  • [48] Adversarial attacks against dynamic graph neural networks via node injection
    Jiang, Yanan
    Xia, Hui
    HIGH-CONFIDENCE COMPUTING, 2024, 4 (01):
  • [49] Watermarking-based Defense against Adversarial Attacks on Deep Neural Networks
    Li, Xiaoting
    Chen, Lingwei
    Zhang, Jinquan
    Larus, James
    Wu, Dinghao
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [50] SAM: Query-efficient Adversarial Attacks against Graph Neural Networks
    Zhang, Chenhan
    Zhang, Shiyao
    Yu, James J. Q.
    Yu, Shui
    ACM TRANSACTIONS ON PRIVACY AND SECURITY, 2023, 26 (04)