Optimization and Abstraction: A Synergistic Approach for Analyzing Neural Network Robustness

Cited by: 62
Authors
Anderson, Greg [1 ]
Pailoor, Shankara [1 ]
Dillig, Isil [1 ]
Chaudhuri, Swarat [2 ]
Affiliations
[1] Univ Texas Austin, Austin, TX 78712 USA
[2] Rice Univ, Houston, TX USA
Source
PROCEEDINGS OF THE 40TH ACM SIGPLAN CONFERENCE ON PROGRAMMING LANGUAGE DESIGN AND IMPLEMENTATION (PLDI '19) | 2019
Funding
National Science Foundation (USA);
Keywords
Machine learning; Abstract interpretation; Optimization; Robustness; Strategy;
DOI
10.1145/3314221.3314614
CLC number
TP31 [Computer software];
Discipline classification codes
081202 ; 0835 ;
Abstract
In recent years, the notion of local robustness (or robustness for short) has emerged as a desirable property of deep neural networks. Intuitively, robustness means that small perturbations to an input do not cause the network to misclassify it. In this paper, we present a novel algorithm for verifying robustness properties of neural networks. Our method synergistically combines gradient-based optimization methods for counterexample search with abstraction-based proof search to obtain a sound and δ-complete decision procedure. Our method also employs a data-driven approach to learn a verification policy that guides abstract interpretation during proof search. We have implemented the proposed approach in a tool called Charon and experimentally evaluated it on hundreds of benchmarks. Our experiments show that the proposed approach significantly outperforms three state-of-the-art tools, namely AI², Reluplex, and ReluVal.
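To make the abstract's notion of "gradient-based counterexample search" concrete, here is a minimal, dependency-free sketch (not the paper's Charon implementation): projected gradient descent on the classification margin of a tiny hand-coded ReLU network, constrained to an L∞ ball of radius eps around the input. All weights, the input `x0`, and the helper names below are made-up toy values for illustration.

```python
def relu(v):
    return [max(0.0, t) for t in v]

def net(x):
    # Toy 2-2-2 ReLU network returning logits for classes 0 and 1.
    h = relu([1.0 * x[0] - 1.0 * x[1], 0.5 * x[0] + 0.5 * x[1]])
    return [2.0 * h[0] - 1.0 * h[1], -1.0 * h[0] + 2.0 * h[1]]

def margin(x, label):
    # Positive margin = correctly classified; a counterexample drives it <= 0.
    out = net(x)
    other = max(o for i, o in enumerate(out) if i != label)
    return out[label] - other

def grad(f, x, h=1e-5):
    # Central finite-difference gradient (keeps the sketch dependency-free).
    g = []
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def search_counterexample(x0, label, eps, steps=200, lr=0.05):
    x = list(x0)
    for _ in range(steps):
        g = grad(lambda z: margin(z, label), x)
        # Descend the margin, then project back onto the L-infinity eps-ball.
        x = [min(x0[i] + eps, max(x0[i] - eps, x[i] - lr * g[i]))
             for i in range(len(x))]
        if margin(x, label) <= 0:
            return x  # perturbation that flips the prediction
    return None  # search failed: robustness is not refuted (but not proved)

cex = search_counterexample([1.0, 0.2], label=0, eps=0.5)
print(cex)
```

Note the asymmetry the paper exploits: a successful search yields a concrete counterexample (soundly refuting robustness), while a failed search proves nothing; proving robustness is the job of the abstraction-based proof search that Charon runs alongside this kind of optimization.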
Pages: 731-744
Number of pages: 14