Make Sure You're Unsure: A Framework for Verifying Probabilistic Specifications

Cited: 0
Authors
Berrada, Leonard [1 ]
Dathathri, Sumanth [1 ]
Dvijotham, Krishnamurthy [1 ]
Stanforth, Robert [1 ]
Bunel, Rudy [1 ]
Uesato, Jonathan [1 ]
Gowal, Sven [1 ]
Kumar, M. Pawan [1 ]
Institutions
[1] DeepMind, London, England
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021) | 2021 / Vol. 34
Keywords
DOI: Not available
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract
Most real-world applications require dealing with stochasticity, such as sensor noise or predictive uncertainty, where formal specifications of desired behavior are inherently probabilistic. Despite the promise of formal verification in ensuring the reliability of neural networks, progress in the direction of probabilistic specifications has been limited. In this direction, we first introduce a general formulation of probabilistic specifications for neural networks, which captures both probabilistic networks (e.g., Bayesian neural networks, MC-Dropout networks) and uncertain inputs (distributions over inputs arising from sensor noise or other perturbations). We then propose a general technique to verify such specifications by generalizing the notion of Lagrangian duality, replacing standard Lagrangian multipliers with "functional multipliers" that can be arbitrary functions of the activations at a given layer. We show that an optimal choice of functional multipliers leads to exact verification (i.e., sound and complete verification), and for specific forms of multipliers, we develop tractable practical verification algorithms. We empirically validate our algorithms by applying them to Bayesian Neural Networks (BNNs) and MC-Dropout networks, and certifying properties such as adversarial robustness and robust detection of out-of-distribution (OOD) data. On these tasks we are able to provide significantly stronger guarantees than prior work; for instance, for a VGG-64 MC-Dropout CNN trained on CIFAR-10 in a verification-agnostic manner, we improve the certified AUC (a verified lower bound on the true AUC) for robust OOD detection (on CIFAR-100) from 0% to 29%. Similarly, for a BNN trained on MNIST, we improve the l-infinity robust accuracy from 60.2% to 74.6%. Further, on a novel specification, distributionally robust OOD detection, we improve the certified AUC from 5% to 23%.
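The "functional multiplier" idea in the abstract can be sketched as follows (notation is mine, not the paper's; consult the paper for the precise statement). Model the network as a chain of possibly stochastic layers, with the input x_0 drawn from a distribution supported on a set X_0, each layer producing x_k from x_{k-1} via a kernel π_k, and ψ the specification whose expectation we want to bound. Attaching a function λ_k of the layer-k activations to each layer constraint, and telescoping with λ_0 ≡ 0 and λ_K ≡ ψ, gives a weak-duality bound that decomposes layer by layer:

```latex
% Hedged sketch of a functional Lagrangian dual bound (notation mine).
% Define the one-step expected multiplier
%   \bar{\lambda}_k(x) := \mathbb{E}_{x_k \sim \pi_k(\cdot \mid x)}[\lambda_k(x_k)],
% with \lambda_0 \equiv 0 and \lambda_K \equiv \psi. Then for any choice of
% \lambda_1, \dots, \lambda_{K-1}:
\[
  \mathbb{E}\big[\psi(x_K)\big]
  \;\le\;
  \sup_{x_0 \in X_0} \bar{\lambda}_1(x_0)
  \;+\;
  \sum_{k=2}^{K} \,\sup_{x_{k-1} \in X_{k-1}}
    \Big( \bar{\lambda}_k(x_{k-1}) - \lambda_{k-1}(x_{k-1}) \Big),
\]
% where X_{k-1} is any set guaranteed to contain the layer-(k-1) activations.
```

Each term follows from bounding the conditional expectation of λ_k given x_{k-1} by its supremum over the reachable activation set, so the bound is sound for any multipliers; the abstract's claims are that an optimal choice of the λ_k makes it tight, and that restricting the λ_k to tractable families yields practical verification algorithms.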
Pages: 12