Robustness Against Adversarial Attacks in Neural Networks Using Incremental Dissipativity

Cited by: 3
Authors
Aquino, Bernardo [1 ]
Rahnama, Arash [2 ]
Seiler, Peter [3 ]
Lin, Lizhen [4 ]
Gupta, Vijay [1 ]
Affiliations
[1] Univ Notre Dame, Dept Elect Engn, Notre Dame, IN 46556 USA
[2] Amazon Inc, New York, NY 10001 USA
[3] Univ Michigan, Dept Elect Engn & Comp Sci, Ann Arbor, MI 48109 USA
[4] Univ Notre Dame, Dept Appl Computat Math & Stat, Notre Dame, IN 46556 USA
Source
IEEE CONTROL SYSTEMS LETTERS, 2022, Vol. 6
Keywords
Biological neural networks; Robustness; Training; Perturbation methods; Standards; Neurons; Optimization; Adversarial Attacks; Deep Neural Networks; Robust Design; Passivity Theory; Spectral Regularization
DOI
10.1109/LCSYS.2022.3150719
CLC Classification
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
Adversarial examples can easily degrade the classification performance of neural networks. Empirical methods for promoting robustness against such examples have been proposed, but they often lack analytical insights and formal guarantees. Recently, robustness certificates based on system-theoretic notions have appeared in the literature. This letter proposes an incremental dissipativity-based robustness certificate for neural networks, in the form of a linear matrix inequality (LMI) per layer. We also propose a sufficient spectral norm bound for this certificate that scales to neural networks with many layers. We demonstrate improved performance against adversarial attacks on a feed-forward neural network trained on MNIST and an AlexNet trained on CIFAR-10.
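For intuition, the certificate builds on incremental dissipativity. As a generic sketch (the standard (Q, S, R) supply-rate form, not necessarily the letter's exact matrices), a layer map \phi is incrementally dissipative if, for any two inputs u_1, u_2 with \Delta u = u_1 - u_2 and \Delta y = \phi(u_1) - \phi(u_2),

    \Delta y^\top Q \,\Delta y + 2\,\Delta y^\top S \,\Delta u + \Delta u^\top R \,\Delta u \ge 0 .

Choosing (Q, S, R) to limit each layer's incremental gain bounds how far an input perturbation can propagate through the network, and checking such a condition for a fixed weight matrix and a slope-restricted activation reduces to a linear matrix inequality.

The scalable spectral norm condition can be illustrated with a short Python sketch. This is an assumption-based illustration only: the function names are invented here, the layers are random placeholders, and the quantity computed is the standard product-of-layer-norms Lipschitz estimate rather than the letter's certificate.

    import numpy as np

    def spectral_norm(W, n_iter=50):
        # Estimate the largest singular value of W by power iteration.
        v = np.random.randn(W.shape[1])
        v /= np.linalg.norm(v)
        for _ in range(n_iter):
            u = W @ v
            u /= np.linalg.norm(u)
            v = W.T @ u
            v /= np.linalg.norm(v)
        return float(u @ (W @ v))

    def lipschitz_upper_bound(weights, slope=1.0):
        # Product of per-layer spectral norms times the activation's
        # slope bound (1 for ReLU). A small global Lipschitz constant
        # limits how much an adversarial input perturbation can move
        # the network's output.
        bound = 1.0
        for W in weights:
            bound *= slope * spectral_norm(W)
        return bound

    # Toy usage with random placeholder layers (784 -> 64 -> 32 -> 10).
    rng = np.random.default_rng(0)
    layers = [0.05 * rng.standard_normal((64, 784)),
              0.10 * rng.standard_normal((32, 64)),
              0.10 * rng.standard_normal((10, 32))]
    print("Lipschitz upper bound:", lipschitz_upper_bound(layers))

Because the product of norms compounds across layers, this kind of bound scales easily to deep networks but is conservative; the abstract positions the spectral norm condition as a sufficient, scalable check for the per-layer LMI certificate.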
Pages: 2341-2346
Page count: 6
Related Papers
50 items in total
  • [21] Securing Deep Spiking Neural Networks against Adversarial Attacks through Inherent Structural Parameters
    El-Allami, Rida
    Marchisio, Alberto
    Shafique, Muhammad
    Alouani, Ihsen
    PROCEEDINGS OF THE 2021 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE 2021), 2021, : 774 - 779
  • [22] Enhancing Robustness Against Adversarial Attacks in Multimodal Emotion Recognition With Spiking Transformers
    Chen, Guoming
    Qian, Zhuoxian
    Zhang, Dong
    Qiu, Shuang
    Zhou, Ruqi
    IEEE ACCESS, 2025, 13 : 34584 - 34597
  • [23] SENTINEL: Securing Indoor Localization Against Adversarial Attacks With Capsule Neural Networks
    Gufran, Danish
    Anandathirtha, Pooja
    Pasricha, Sudeep
    IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2024, 43 (11) : 4021 - 4032
  • [24] Towards Improving Robustness of Deep Neural Networks to Adversarial Perturbations
    Amini, Sajjad
    Ghaemmaghami, Shahrokh
    IEEE TRANSACTIONS ON MULTIMEDIA, 2020, 22 (07) : 1889 - 1903
  • [25] A Framework for Enhancing Deep Neural Networks Against Adversarial Malware
    Li, Deqiang
    Li, Qianmu
    Ye, Yanfang
    Xu, Shouhuai
IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2021, 8 (01) : 736 - 750
  • [26] A Method to Verify Neural Network Decoders Against Adversarial Attacks
    Shen, Kaijie
    Li, Chengju
    IEEE COMMUNICATIONS LETTERS, 2025, 29 (04) : 843 - 847
  • [27] Robustness of Generative Adversarial CLIPs Against Single-Character Adversarial Attacks in Text-to-Image Generation
    Chanakya, Patibandla
    Harsha, Putla
    Pratap Singh, Krishna
    IEEE ACCESS, 2024, 12 : 162551 - 162563
  • [28] Securing Networks Against Adversarial Domain Name System Tunneling Attacks Using Hybrid Neural Networks
    Ness, Stephanie
IEEE ACCESS, 2025, 13 : 46697 - 46709
  • [29] Robust Graph Neural Networks Against Adversarial Attacks via Jointly Adversarial Training
    Tian, Hu
    Ye, Bowei
    Zheng, Xiaolong
    Wu, Desheng Dash
IFAC PAPERSONLINE, 2020, 53 (05) : 420 - 425
  • [30] Model Compression Hardens Deep Neural Networks: A New Perspective to Prevent Adversarial Attacks
    Liu, Qi
    Wen, Wujie
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (01) : 3 - 14