Abstract Interpretation of ReLU Neural Networks with Optimizable Polynomial Relaxations

Times Cited: 0
Authors
Kern, Philipp [1]
Sinz, Carsten [1,2]
Affiliations
[1] Karlsruhe Inst Technol, Inst Theoret Comp Sci, Fasanengarten 5, D-76131 Karlsruhe, Germany
[2] Karlsruhe Univ Appl Sci, Moltkestr 30, D-76133 Karlsruhe, Germany
Source
STATIC ANALYSIS, SAS 2024 | 2025, Vol. 14995
Keywords
Neural Network Verification; Abstract Interpretation; Polynomial Overapproximation
DOI
10.1007/978-3-031-74776-2_7
Chinese Library Classification (CLC)
TP31 [Computer Software]
Discipline Codes
081202; 0835
Abstract
Neural networks have been shown to be highly successful in a wide range of applications. However, due to their black-box behavior, their applicability can be restricted in safety-critical environments, and additional verification techniques are required. Many state-of-the-art verification approaches use abstract interpretation based on linear overapproximations of the activation functions. Linearly approximating nonlinear activation functions clearly incurs a loss of precision. One way to overcome this limitation is to use polynomial approximations. A second way shown to improve the obtained bounds is to optimize the slopes of the linear relaxations. Combining these insights, we propose a method that enables similar parameter optimization for polynomial relaxations. Given arbitrary values for a polynomial's monomial coefficients, we can obtain valid polynomial overapproximations by appropriate upward or downward shifts. Since any choice of monomial coefficients can be turned into a valid overapproximation in this way, we use gradient-based methods to optimize that choice. Our evaluation on verifying robustness against adversarial patches on the MNIST and CIFAR10 benchmarks shows that we can verify more instances and achieve tighter bounds than state-of-the-art bound propagation methods.
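To illustrate the shift construction described in the abstract, the following is a minimal NumPy sketch (not code from the paper; the names relu and upper_shift and the example coefficients are hypothetical). It computes the smallest upward shift that turns an arbitrary polynomial into a valid upper relaxation of ReLU on an input interval [l, u]; the downward shift for a lower relaxation is symmetric. Because ReLU is piecewise linear, the exact shift can be found by checking the interval endpoints, the kink at 0, and the stationary points of the piecewise-polynomial difference.

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def upper_shift(coeffs, l, u):
    # Smallest delta such that p(x) + delta >= ReLU(x) for all x in [l, u].
    # coeffs: monomial coefficients, highest power first (np.polyval order).
    # The lower-relaxation shift is the analogous max of p(x) - ReLU(x).
    dp = np.polyder(coeffs)                    # p'(x)
    cands = [l, u] + ([0.0] if l < 0.0 < u else [])
    # On [l, 0]: ReLU(x) - p(x) = -p(x); stationary where p'(x) = 0.
    for r in np.roots(dp):
        if abs(r.imag) < 1e-12 and l <= r.real <= min(0.0, u):
            cands.append(r.real)
    # On [0, u]: ReLU(x) - p(x) = x - p(x); stationary where p'(x) = 1.
    for r in np.roots(np.polysub(dp, [1.0])):
        if abs(r.imag) < 1e-12 and max(0.0, l) <= r.real <= u:
            cands.append(r.real)
    xs = np.array(cands)
    return float(np.max(relu(xs) - np.polyval(coeffs, xs)))

# Any coefficient choice becomes a sound upper relaxation after shifting,
# which is what makes unconstrained gradient steps on the coefficients
# admissible:
coeffs = np.array([0.3, 0.5, 0.1])   # arbitrary p(x) = 0.3x^2 + 0.5x + 0.1
l, u = -1.0, 1.0
delta = upper_shift(coeffs, l, u)
grid = np.linspace(l, u, 1001)
assert np.all(np.polyval(coeffs, grid) + delta >= relu(grid) - 1e-9)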
Pages: 173-193
Page Count: 21