A domain-theoretic framework for robustness analysis of neural networks

Cited by: 1
Authors
Zhou, Can [1]
Shaikh, Razin A. [1,2]
Li, Yiran [3]
Farjudian, Amin [3]
Affiliations
[1] Univ Oxford, Dept Comp Sci, Oxford, England
[2] Quantinuum Ltd, Oxford, England
[3] Univ Nottingham Ningbo China, Sch Comp Sci, Ningbo, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Domain theory; neural network; robustness; Lipschitz constant; Clarke-gradient; query-driven communication; real; semantics; computability; computation; spaces
DOI
10.1017/S0960129523000142
Chinese Library Classification
TP301 [Theory and Methods]
Subject Classification Code
081202
Abstract
A domain-theoretic framework is presented for validated robustness analysis of neural networks. First, the global robustness of a general class of networks is analyzed. Then, using the fact that Edalat's domain-theoretic L-derivative coincides with Clarke's generalized gradient, the framework is extended to attack-agnostic local robustness analysis. The proposed framework is well suited to designing algorithms that are correct by construction. This claim is exemplified by developing a validated algorithm for estimating the Lipschitz constant of feedforward regressors. The completeness of the algorithm is proved over differentiable networks and over general position ReLU networks. Computability results are obtained within the framework of effectively given domains. Using the proposed domain model, differentiable and non-differentiable networks can be analyzed uniformly. The validated algorithm is implemented using arbitrary-precision interval arithmetic, and the results of some experiments are presented. The software implementation is truly validated, since it also accounts for floating-point errors.
Pages: 68-105
Page count: 38
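
To illustrate the kind of computation the abstract describes, the following is a minimal Python sketch of bounding the Lipschitz constant of a one-hidden-layer ReLU network by interval enclosure of the Clarke gradient. It is not the authors' implementation (which uses arbitrary-precision, outward-rounded interval arithmetic); all function names and the example weights below are hypothetical, and plain floats are used, so the bound is validated only up to rounding.

    from itertools import product

    def imul(a, b):
        # Product of two intervals a = (lo, hi) and b = (lo, hi).
        ps = [x * y for x, y in product(a, b)]
        return (min(ps), max(ps))

    def iadd(a, b):
        # Sum of two intervals.
        return (a[0] + b[0], a[1] + b[1])

    def relu_clarke(pre):
        # Enclosure of the Clarke subdifferential of ReLU over the
        # pre-activation interval `pre`; at 0 it is the interval [0, 1].
        lo, hi = pre
        if hi < 0:
            return (0.0, 0.0)
        if lo > 0:
            return (1.0, 1.0)
        return (0.0, 1.0)  # pre-activation straddles 0

    def lipschitz_bound(W1, w2, box):
        # Bounds, over the input box, the 1-norm of the Clarke gradient
        # of x |-> w2 . relu(W1 @ x), which in turn bounds the Lipschitz
        # constant with respect to the max norm on inputs.
        pre = []
        for row in W1:
            s = (0.0, 0.0)
            for w, x in zip(row, box):
                s = iadd(s, imul((w, w), x))
            pre.append(s)  # interval pre-activation of a hidden unit
        gates = [relu_clarke(p) for p in pre]
        bound = 0.0
        for j in range(len(W1[0])):
            # Enclose the j-th partial derivative: sum_k w2[k]*gate_k*W1[k][j].
            d = (0.0, 0.0)
            for k, row in enumerate(W1):
                d = iadd(d, imul(imul((w2[k], w2[k]), gates[k]),
                                 (row[j], row[j])))
            bound += max(abs(d[0]), abs(d[1]))
        return bound

    # A 2-input, 2-hidden-unit, 1-output ReLU network on the box [-1, 1]^2.
    W1  = [[1.0, -2.0], [0.5, 1.0]]
    w2  = [1.0, -1.0]
    box = [(-1.0, 1.0), (-1.0, 1.0)]
    print(lipschitz_bound(W1, w2, box))  # prints 4.0

The enclosure [0, 1] is used precisely for those hidden units whose pre-activation interval straddles 0, which is exactly where the network is non-differentiable; this is how differentiable and non-differentiable networks can be handled uniformly, as the abstract states.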