Robustness Against Adversarial Attacks in Neural Networks Using Incremental Dissipativity

Cited by: 5
Authors
Aquino, Bernardo [1]
Rahnama, Arash [2]
Seiler, Peter [3]
Lin, Lizhen [4]
Gupta, Vijay [1]
Affiliations
[1] Univ Notre Dame, Dept Elect Engn, Notre Dame, IN 46556 USA
[2] Amazon Inc, New York, NY 10001 USA
[3] Univ Michigan, Dept Elect Engn & Comp Sci, Ann Arbor, MI 48109 USA
[4] Univ Notre Dame, Dept Appl Computat Math & Stat, Notre Dame, IN 46556 USA
Source
IEEE CONTROL SYSTEMS LETTERS | 2022, Vol. 6
Keywords
Biological neural networks; Robustness; Training; Perturbation methods; Standards; Neurons; Optimization; Adversarial Attacks; Deep Neural Networks; Robust Design; Passivity Theory; Spectral Regularization;
DOI
10.1109/LCSYS.2022.3150719
CLC Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Adversarial examples can easily degrade the classification performance of neural networks. Empirical methods for promoting robustness to such examples have been proposed, but they often lack both analytical insights and formal guarantees. Recently, some robustness certificates based on system-theoretic notions have appeared in the literature. This letter proposes an incremental dissipativity-based robustness certificate for neural networks in the form of a linear matrix inequality for each layer. We also propose a sufficient spectral norm bound for this certificate that is scalable to neural networks with multiple layers. We demonstrate the improved performance against adversarial attacks on a feed-forward neural network trained on MNIST and an AlexNet trained on CIFAR-10.
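The letter's per-layer LMI certificate is not reproduced in this record, but the scalable spectral-norm condition the abstract mentions can be illustrated with a generic sketch: for 1-Lipschitz activations such as ReLU, the product of the layers' spectral norms upper-bounds the network's end-to-end Lipschitz constant, which limits how far a bounded adversarial input perturbation can move the output. The Python snippet below is only an illustration under that assumption; the layer shapes, weights, and the spectral_norm helper are hypothetical and are not the letter's specific bound or implementation.

import numpy as np

def spectral_norm(W, n_iters=50):
    # Power iteration: estimates the largest singular value (spectral norm) of W.
    rng = np.random.default_rng(0)
    v = rng.standard_normal(W.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(n_iters):
        u = W @ v
        u /= np.linalg.norm(u)
        v = W.T @ u
        v /= np.linalg.norm(v)
    return float(u @ W @ v)

# Hypothetical feed-forward weights; in practice these come from a trained network.
layers = [0.05 * np.random.randn(64, 784),
          0.05 * np.random.randn(32, 64),
          0.05 * np.random.randn(10, 32)]

# With 1-Lipschitz activations, the product of per-layer spectral norms
# upper-bounds the network's Lipschitz constant, so a small input perturbation
# produces a proportionally bounded change at the output.
lipschitz_bound = np.prod([spectral_norm(W) for W in layers])
print(f"Product of layer spectral norms (Lipschitz upper bound): {lipschitz_bound:.3f}")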
Pages: 2341-2346
Number of pages: 6