Robustness Against Adversarial Attacks in Neural Networks Using Incremental Dissipativity

Citations: 3
Authors
Aquino, Bernardo [1 ]
Rahnama, Arash [2 ]
Seiler, Peter [3 ]
Lin, Lizhen [4 ]
Gupta, Vijay [1 ]
Affiliations
[1] Univ Notre Dame, Dept Elect Engn, Notre Dame, IN 46556 USA
[2] Amazon Inc, New York, NY 10001 USA
[3] Univ Michigan, Dept Elect Engn & Comp Sci, Ann Arbor, MI 48109 USA
[4] Univ Notre Dame, Dept Appl Computat Math & Stat, Notre Dame, IN 46656 USA
Source
IEEE CONTROL SYSTEMS LETTERS | 2022, Vol. 6
Keywords
Biological neural networks; Robustness; Training; Perturbation methods; Standards; Neurons; Optimization; Adversarial Attacks; Deep Neural Networks; Robust Design; Passivity Theory; Spectral Regularization
DOI
10.1109/LCSYS.2022.3150719
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
Adversarial examples can easily degrade the classification performance of neural networks. Empirical methods for promoting robustness to such examples have been proposed, but they often lack both analytical insight and formal guarantees. Recently, some robustness certificates based on system-theoretic notions have appeared in the literature. This letter proposes an incremental dissipativity-based robustness certificate for neural networks, stated as a linear matrix inequality for each layer. We also propose a sufficient spectral norm bound for this certificate that is scalable to neural networks with multiple layers. We demonstrate improved performance against adversarial attacks on a feed-forward neural network trained on MNIST and an AlexNet trained on CIFAR-10.
Pages: 2341-2346
Page count: 6
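
As a rough illustration of the spectral-norm condition mentioned in the abstract, the Python sketch below computes the spectral norm (largest singular value) of each layer's weight matrix in a toy feed-forward network and multiplies them into a Lipschitz-style upper bound on input-output sensitivity. The layer sizes and random weights are hypothetical stand-ins, not the trained models from the letter, and the product-of-norms bound shown here is the standard sufficient condition; the letter's actual per-layer LMI certificate is given in the publication at the DOI above.

import numpy as np

# Hypothetical weights for a small fully connected network
# (stand-ins only; not the MNIST/CIFAR-10 models from the letter).
rng = np.random.default_rng(0)
weights = [
    0.05 * rng.standard_normal((64, 784)),
    0.10 * rng.standard_normal((32, 64)),
    0.10 * rng.standard_normal((10, 32)),
]

def spectral_norm(W):
    # Largest singular value of W, i.e., its operator 2-norm.
    return np.linalg.svd(W, compute_uv=False)[0]

# With 1-Lipschitz activations such as ReLU, the product of the
# layer-wise spectral norms upper-bounds the network's Lipschitz
# constant, so keeping each factor small limits how much a bounded
# input perturbation can change the output.
norms = [spectral_norm(W) for W in weights]
bound = float(np.prod(norms))

print("per-layer spectral norms:", [round(s, 3) for s in norms])
print("Lipschitz upper bound:", round(bound, 3))

Checking each layer separately is what makes this kind of condition scale with depth: the cost is one SVD (or one LMI, in the letter's formulation) per layer rather than a single optimization over the entire network.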