Robustness Against Adversarial Attacks in Neural Networks Using Incremental Dissipativity

Cited: 3
Authors
Aquino, Bernardo [1 ]
Rahnama, Arash [2 ]
Seiler, Peter [3 ]
Lin, Lizhen [4 ]
Gupta, Vijay [1 ]
Affiliations
[1] Univ Notre Dame, Dept Elect Engn, Notre Dame, IN 46556 USA
[2] Amazon Inc, New York, NY 10001 USA
[3] Univ Michigan, Dept Elect Engn & Comp Sci, Ann Arbor, MI 48109 USA
[4] Univ Notre Dame, Dept Appl Computat Math & Stat, Notre Dame, IN 46656 USA
Source
IEEE CONTROL SYSTEMS LETTERS | 2022, Vol. 6
Keywords
Biological neural networks; Robustness; Training; Perturbation methods; Standards; Neurons; Optimization; Adversarial Attacks; Deep Neural Networks; Robust Design; Passivity Theory; Spectral Regularization;
DOI
10.1109/LCSYS.2022.3150719
CLC Classification
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
Adversarial examples can easily degrade classification performance in neural networks. Empirical methods for promoting robustness to such examples have been proposed, but they often lack both analytical insights and formal guarantees. Recently, some robustness certificates based on system-theoretic notions have appeared in the literature. This letter proposes an incremental dissipativity-based robustness certificate for neural networks, expressed as a linear matrix inequality for each layer. We also propose a sufficient spectral norm bound for this certificate that is scalable to neural networks with multiple layers. We demonstrate improved performance against adversarial attacks on a feed-forward neural network trained on MNIST and an AlexNet trained on CIFAR-10.
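To illustrate the flavor of a spectral-norm-based certificate (a minimal sketch, not the paper's exact LMI condition): for a feed-forward network whose activations are 1-Lipschitz (e.g. ReLU), the product of the per-layer weight spectral norms upper-bounds the network's global Lipschitz constant, so bounding each layer's spectral norm yields a layer-wise, scalable robustness-style guarantee. The function names below are illustrative, not from the paper.

```python
import numpy as np

def spectral_norm(W):
    # Spectral norm = largest singular value of the weight matrix.
    return np.linalg.svd(W, compute_uv=False)[0]

def lipschitz_upper_bound(weights):
    # For a feed-forward net x -> W_k phi(... phi(W_1 x)) with
    # 1-Lipschitz activations phi, the product of per-layer spectral
    # norms upper-bounds the end-to-end Lipschitz constant.
    bound = 1.0
    for W in weights:
        bound *= spectral_norm(W)
    return bound

# Example: a two-layer network with random weights.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((16, 32)), rng.standard_normal((10, 16))]
L = lipschitz_upper_bound(weights)
# Any input perturbation of norm eps can move the output by at most L * eps.
```

Regularizing each layer's spectral norm during training (spectral regularization, one of the record's keywords) directly shrinks this product and hence the certified sensitivity to input perturbations.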
Pages: 2341-2346
Page count: 6