Robustness Against Adversarial Attacks in Neural Networks Using Incremental Dissipativity

Cited by: 3
Authors
Aquino, Bernardo [1]
Rahnama, Arash [2]
Seiler, Peter [3]
Lin, Lizhen [4]
Gupta, Vijay [1]
Affiliations
[1] Univ Notre Dame, Dept Elect Engn, Notre Dame, IN 46556 USA
[2] Amazon Inc, New York, NY 10001 USA
[3] Univ Michigan, Dept Elect Engn & Comp Sci, Ann Arbor, MI 48109 USA
[4] Univ Notre Dame, Dept Appl Computat Math & Stat, Notre Dame, IN 46656 USA
Source
IEEE CONTROL SYSTEMS LETTERS
Keywords
Biological neural networks; Robustness; Training; Perturbation methods; Standards; Neurons; Optimization; Adversarial Attacks; Deep Neural Networks; Robust Design; Passivity Theory; Spectral Regularization
DOI
10.1109/LCSYS.2022.3150719
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Adversarial examples can easily degrade the classification performance of neural networks. Empirical methods for promoting robustness to such examples have been proposed, but they often lack both analytical insights and formal guarantees. Recently, some robustness certificates based on system-theoretic notions have appeared in the literature. This letter proposes an incremental dissipativity-based robustness certificate for neural networks in the form of a linear matrix inequality for each layer. We also propose a sufficient spectral norm bound for this certificate that is scalable to neural networks with multiple layers. We demonstrate improved performance against adversarial attacks on a feed-forward neural network trained on MNIST and an AlexNet trained on CIFAR-10.
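To make the abstract's layerwise spectral-norm condition concrete, here is a minimal sketch; it is not the paper's LMI-based certificate. Assuming 1-Lipschitz activations (e.g., ReLU), bounding each weight matrix's spectral norm yields a conservative network-wide Lipschitz constant, which is the scalability idea the abstract refers to. The function names (spectral_norm, layerwise_certificate) and the chosen bounds are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

def spectral_norm(W, n_iters=50):
    """Estimate the largest singular value of W by power iteration."""
    rng = np.random.default_rng(0)
    v = rng.standard_normal(W.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(n_iters):
        u = W @ v
        u /= np.linalg.norm(u)
        v = W.T @ u
        v /= np.linalg.norm(v)
    return float(u @ (W @ v))

def layerwise_certificate(weights, bounds):
    """Check a hypothetical per-layer condition ||W_i||_2 <= b_i.

    With 1-Lipschitz activations, the product of the per-layer bounds is an
    upper bound on the network's Lipschitz constant, so a small product limits
    how far an adversarial input perturbation can move the output.
    """
    ok = all(spectral_norm(W) <= b for W, b in zip(weights, bounds))
    return ok, float(np.prod(bounds))

# Toy usage on random layer weights (illustration only).
rng = np.random.default_rng(1)
weights = [0.01 * rng.standard_normal((64, 784)),
           0.01 * rng.standard_normal((10, 64))]
satisfied, lip_bound = layerwise_certificate(weights, bounds=[1.0, 1.0])
print(f"certificate satisfied: {satisfied}, Lipschitz upper bound: {lip_bound}")
```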
Pages: 2341-2346
Page count: 6
Related Papers
50 records in total
  • [31] Robust Graph Neural Networks Against Adversarial Attacks via Jointly Adversarial Training
    Tian, Hu
    Ye, Bowei
    Zheng, Xiaolong
    Wu, Desheng Dash
    IFAC PAPERSONLINE, 2020, 53 (5): 420-425
  • [32] Certified Robustness of Graph Neural Networks against Adversarial Structural Perturbation
    Wang, Binghui
    Jia, Jinyuan
    Cao, Xiaoyu
    Gong, Neil Zhenqiang
    KDD '21: PROCEEDINGS OF THE 27TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2021: 1645-1653
  • [33] Comparison of the Resilience of Convolutional and Cellular Neural Networks Against Adversarial Attacks
    Horvath, Andras
    2022 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS 22), 2022: 2348-2352
  • [34] Evolving Hyperparameters for Training Deep Neural Networks against Adversarial Attacks
    Liu, Jia
    Jin, Yaochu
    2019 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (IEEE SSCI 2019), 2019: 1778-1785
  • [35] Is Approximation Universally Defensive Against Adversarial Attacks in Deep Neural Networks?
    Siddique, Ayesha
    Hoque, Khaza Anuarul
    PROCEEDINGS OF THE 2022 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE 2022), 2022: 364-369
  • [36] Graph Structure Reshaping Against Adversarial Attacks on Graph Neural Networks
    Wang, Haibo
    Zhou, Chuan
    Chen, Xin
    Wu, Jia
    Pan, Shirui
    Li, Zhao
    Wang, Jilong
    Yu, Philip S.
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (11): 6344-6357
  • [37] HeteroGuard: Defending Heterogeneous Graph Neural Networks against Adversarial Attacks
    Kumarasinghe, Udesh
    Nabeel, Mohamed
    De Zoysa, Kasun
    Gunawardana, Kasun
    Elvitigala, Charitha
    2022 IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS, ICDMW, 2022: 698-705
  • [38] Efficacy of Defending Deep Neural Networks against Adversarial Attacks with Randomization
    Zhou, Yan
    Kantarcioglu, Murat
    Xi, Bowei
    ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS II, 2020, 11413
  • [39] Detect Adversarial Attacks Against Deep Neural Networks With GPU Monitoring
    Zoppi, Tommaso
    Ceccarelli, Andrea
    IEEE ACCESS, 2021, 9: 150579-150591
  • [40] Centered-Ranking Learning Against Adversarial Attacks in Neural Networks
    Appiah, Benjamin
    Adu, Adolph S. Y.
    Osei, Isaac
    Assamah, Gabriel
    Hammond, Ebenezer N. A.
    INTERNATIONAL JOURNAL OF NETWORK SECURITY, 2023, 25 (5): 814-820