Multi-path Back-propagation Method for Neural Network Verification

Cited by: 0
Authors
Zheng Y. [1]
Shi X.-M. [1]
Liu J.-X. [1]
Affiliations
[1] College of Computer Science and Software Engineering, Shenzhen University, Shenzhen
Source
Ruan Jian Xue Bao/Journal of Software | 2022, Vol. 33, No. 7
Keywords
abstract interpretation; multi-path back-propagation; neural network verification; symbolic propagation;
DOI
10.13328/j.cnki.jos.006585
Abstract
Symbolic propagation methods based on linear abstraction play a significant role in neural network verification. This study proposes the notion of multi-path back-propagation for such methods: existing methods compute the upper and lower bounds of each node in a given network along a single back-propagation path and are thus specific instances of the proposed notion, whereas leveraging multiple back-propagation paths effectively improves the accuracy of this kind of method. For evaluation, the proposed multi-path method is quantitatively compared with the state-of-the-art tool DeepPoly on the ACAS Xu, MNIST, and CIFAR10 benchmarks. The experimental results show that it achieves a significant accuracy improvement while introducing only a small extra time cost. In addition, the multi-path back-propagation method is compared with Optimized LiRPA, which is based on global optimization, on the MNIST dataset; the results show that the proposed method still has an accuracy advantage. © 2022 Chinese Academy of Sciences. All rights reserved.
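The core idea of the abstract can be sketched as follows: a DeepPoly-style linear relaxation bounds ReLU between a per-unit lower line (slope 0 or 1) and an upper line, and back-substitution turns these into input-level bounds. Fixing the lower slopes corresponds to one "back-propagation path"; computing bounds under several slope choices and keeping the tightest is the multi-path idea. This is a minimal illustrative sketch, not the paper's implementation: the network, the function name `upper_bound`, and the restriction to one hidden layer and a scalar output are all assumptions made for brevity.

```python
import numpy as np

def upper_bound(W1, b1, W2, b2, x_lo, x_hi, lam):
    """Sound upper bound on y = W2 . ReLU(W1 x + b1) + b2 over the box
    [x_lo, x_hi], using a DeepPoly-style ReLU relaxation with per-unit
    lower slopes `lam` (each in {0, 1}) -- one back-propagation path."""
    # Pre-activation bounds of the hidden layer via interval arithmetic.
    Wp, Wn = np.maximum(W1, 0.0), np.minimum(W1, 0.0)
    l = Wp @ x_lo + Wn @ x_hi + b1
    u = Wp @ x_hi + Wn @ x_lo + b1
    # Relaxation: lam*z <= ReLU(z) <= a*z + c (only unstable units need it).
    den = np.where(u > l, u - l, 1.0)          # guard against division by zero
    a = np.where(l >= 0, 1.0, np.where(u <= 0, 0.0, u / den))
    c = np.where((l < 0) & (u > 0), -u * l / den, 0.0)
    lo = np.where(l >= 0, 1.0, np.where(u <= 0, 0.0, lam))
    # Back-substitute: positive output weights take the upper relaxation,
    # negative ones the lower, giving linear coefficients in z = W1 x + b1.
    W2p, W2n = np.maximum(W2, 0.0), np.minimum(W2, 0.0)
    A = W2p * a + W2n * lo
    M = A @ W1                                  # coefficients in the input x
    return (np.maximum(M, 0.0) @ x_hi + np.minimum(M, 0.0) @ x_lo
            + A @ b1 + W2p @ c + b2)

# Toy network: y = ReLU(x1 + x2) - ReLU(x1 - x2), inputs in [-1, 1]^2.
W1 = np.array([[1.0, 1.0], [1.0, -1.0]])
b1 = np.zeros(2)
W2 = np.array([1.0, -1.0])
b2 = 0.0
x_lo, x_hi = -np.ones(2), np.ones(2)

up0 = upper_bound(W1, b1, W2, b2, x_lo, x_hi, lam=np.zeros(2))  # path: slopes 0
up1 = upper_bound(W1, b1, W2, b2, x_lo, x_hi, lam=np.ones(2))   # path: slopes 1
up_multi = min(up0, up1)    # multi-path: keep the tightest bound per output
```

Each single path already gives a sound bound (ReLU(z) >= lam*z holds for any lam in [0, 1]), so the elementwise minimum over paths is also sound and never looser than any single path; on this toy network the two paths give 2.0 and 3.0, and the multi-path bound 2.0 is exactly the true maximum.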
Pages: 2464-2481
Page count: 17
References
35 records
  • [1] Dong YP, Su H, Zhu J., Towards interpretable deep neural networks by leveraging adversarial examples, Acta Automatica Sinica, 48, 1, (2020)
  • [2] Huang X, Kwiatkowska M, Wang S, Wu M., Safety verification of deep neural networks, Proc. of the 29th Int’l Conf. on Computer Aided Verification, pp. 3-29, (2017)
  • [3] Wang Z, Yan M, Liu S, Chen JJ, Zhang DD, Wu Z, Chen X., Survey on testing of deep neural networks, Ruan Jian Xue Bao/Journal of Software, 31, 5, pp. 1255-1275, (2020)
  • [4] Liu C, Arnon T, Lazarus C, Strong C, Barrett C, Kochenderfer MJ., Algorithms for verifying deep neural networks, Foundations and Trends® in Optimization, 4, 3-4, (2021)
  • [5] Li L, Qi X, Xie T, Li B., SoK: Certified robustness for deep neural networks, (2020)
  • [6] Huang X, Kroening D, Ruan W, Sharp J, Sun Y, Thamo E, Wu M, Yi X., A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability, Computer Science Review, 37, (2020)
  • [7] Katz G, Barrett C, Dill DL, Julian K, Kochenderfer MJ., Reluplex: An efficient SMT solver for verifying deep neural networks, Proc. of the 29th Int’l Conf. on Computer Aided Verification, pp. 97-117, (2017)
  • [8] Singh G, Gehr T, Mirman M, Puschel M, Vechev M., Fast and effective robustness certification, Advances in Neural Information Processing Systems 31: Annual Conf. on Neural Information Processing Systems, pp. 10825-10836, (2018)
  • [9] Tran HD, Manzanas Lopez D, Musau P, Yang X, Nguyen LV, Xiang W, Johnson TT., Star-based reachability analysis for deep neural networks, Proc. of the 23rd Int’l Symp. on Formal Methods (FM 2019), pp. 670-686, (2019)
  • [10] Raghunathan A, Steinhardt J, Liang P., Semidefinite relaxations for certifying robustness to adversarial examples, Advances in Neural Information Processing Systems 31: Annual Conf. on Neural Information Processing Systems, (2018)