Improved Incremental Verification for Neural Networks

Times Cited: 0
Authors
Tang, Xuezhou [1 ]
Affiliations
[1] Shenzhen Univ, Shenzhen, Peoples R China
Source
THEORETICAL ASPECTS OF SOFTWARE ENGINEERING, TASE 2024 | 2024 / Vol. 14777
Keywords
Neural network verification; Incremental verification; Branch-and-Bound;
DOI
10.1007/978-3-031-64626-3_23
CLC Number
TP31 [Computer Software];
Discipline Codes
081202; 0835
Abstract
The formal verification of deep neural networks (DNNs) guarantees their robustness. However, DNNs deployed in real-world applications frequently undergo adjustments, for instance through quantization or model repair, which necessitates repeating the computationally expensive formal verification. To verify the robustness of such adjusted DNNs efficiently, incremental DNN verification techniques have recently been proposed. These techniques use information obtained while verifying the original networks to expedite the verification of their adjusted counterparts. In particular, the state-of-the-art incremental technique based on the Branch-and-Bound method exploits branching information from verifying the original DNNs to efficiently generate subproblems for verifying the adjusted counterparts. This paper goes beyond this idea. When verifying adjusted DNNs, we prioritize checking the subproblems that falsified the robustness of the original networks, in the expectation of prompt falsification. Furthermore, we collect information from the Bound processes while verifying the original DNNs and reuse it to make the Bound processes more efficient when verifying the adjusted networks. We propose an incremental DNN verification framework, I-IVAN, and implement it for evaluation. It is compared against IVAN, the state-of-the-art incremental DNN verification tool, on networks trained on the MNIST and CIFAR-10 datasets. The experimental results show that I-IVAN is significantly more efficient than IVAN, being up to 7.71 times faster.
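The abstract describes three ingredients layered on Branch-and-Bound: reusing the branching tree recorded while verifying the original network, checking first the subproblems that falsified the original network's robustness, and warm-starting the Bound step from cached bound information. Below is a minimal Python sketch of that loop under stated assumptions; the names (Subproblem, lower_bound, find_counterexample, cached_bound) are hypothetical placeholders, not the actual IVAN or I-IVAN API, and one-dimensional intervals stand in for real input-region splits.

# Minimal sketch of incremental Branch-and-Bound as described in the abstract.
# All names here are hypothetical illustrations, not the IVAN/I-IVAN API.
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple


@dataclass
class Subproblem:
    # An input region plus information cached from verifying the original DNN.
    region: Tuple[float, float]           # 1-D interval, standing in for a real input split
    falsified_original: bool = False      # did this region falsify the original network?
    cached_bound: Optional[float] = None  # Bound-process result reused as a warm start


def incremental_bab(leaves: List[Subproblem],
                    lower_bound: Callable[[Subproblem], float],
                    find_counterexample: Callable[[Subproblem], Optional[float]],
                    max_iters: int = 10_000) -> str:
    # Seed the queue with the branching leaves recorded for the original DNN,
    # putting previously falsifying subproblems first (prompt falsification).
    queue = sorted(leaves, key=lambda s: not s.falsified_original)
    for _ in range(max_iters):
        if not queue:
            return "verified"  # every subproblem was proved robust
        sub = queue.pop(0)
        cex = find_counterexample(sub)
        if cex is not None:
            return f"falsified at input {cex}"
        # The Bound step; in the incremental setting this is where cached_bound
        # from the original verification would make bounding cheaper.
        if lower_bound(sub) > 0:
            continue  # this subproblem is proved robust
        # Inconclusive: the Branch step splits the region and re-enqueues halves.
        lo, hi = sub.region
        mid = (lo + hi) / 2.0
        queue += [Subproblem((lo, mid), sub.falsified_original, sub.cached_bound),
                  Subproblem((mid, hi), sub.falsified_original, sub.cached_bound)]
    return "unknown"

Seeding the queue with the original run's leaves, rather than with the whole input region, is what makes the second verification incremental; the falsified-first ordering changes only how quickly a counterexample is found, not whether one exists.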
Pages: 392-409
Page count: 18
Related Papers
50 records
  • [31] Adaptive incremental learning in neural networks Preface
    Bouchachia, Abdelhamid
    Nedjah, Nadia
    NEUROCOMPUTING, 2011, 74 (11) : 1783 - 1784
  • [32] Generative Incremental Dependency Parsing with Neural Networks
    Buys, Jan
    Blunsom, Phil
    PROCEEDINGS OF THE 53RD ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL) AND THE 7TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (IJCNLP), VOL 2, 2015, : 863 - 869
  • [34] Convergence analysis of convex incremental neural networks
    Chen, Lei
    Pung, Hung Keng
    ANNALS OF MATHEMATICS AND ARTIFICIAL INTELLIGENCE, 2008, 52 (01) : 67 - 80
  • [35] Recurrent Neural Networks for Incremental Disfluency Detection
    Hough, Julian
    Schlangen, David
    16TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2015), VOLS 1-5, 2015, : 849 - 853
  • [36] Neural networks for improved tracking
    Perlovsky, Leonid I.
    Deming, Ross W.
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2007, 18 (06): : 1854 - 1857
  • [37] Efficient Incremental Training for Deep Convolutional Neural Networks
    Tao, Yudong
    Tu, Yuexuan
    Shyu, Mei-Ling
    2019 2ND IEEE CONFERENCE ON MULTIMEDIA INFORMATION PROCESSING AND RETRIEVAL (MIPR 2019), 2019, : 286 - 291
  • [38] Incremental evolution of trainable neural networks that are backwards compatible
    Christenson, C
    Kaikhah, K
    PROCEEDINGS OF THE IASTED INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND APPLICATIONS, 2006, : 222 - +
  • [39] A clustering approach to incremental learning for feedforward neural networks
    Engelbrecht, AP
    Brits, R
    IJCNN'01: INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1-4, PROCEEDINGS, 2001, : 2019 - 2024
  • [40] Incremental Trainable Parameter Selection in Deep Neural Networks
    Thakur, Anshul
    Abrol, Vinayak
    Sharma, Pulkit
    Zhu, Tingting
    Clifton, David A.
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (05) : 6478 - 6491