NNTBFV: Simplifying and Verifying Neural Networks Using Testing-Based Formal Verification

Cited by: 2
Authors
Liu, Haiyi [1 ]
Liu, Shaoying [1 ]
Xu, Guangquan [2 ]
Liu, Ai [1 ]
Fang, Dingbang [1 ]
Affiliations
[1] Hiroshima Univ, Grad Sch Adv Sci & Engn, Hiroshima 7398527, Japan
[2] Tianjin Univ, Coll Intelligence & Comp, Sch Cybersecur, Tianjin 300072, Peoples R China
Keywords
Neural network simplification; formal verification; neural network reliability
DOI
10.1142/S0218194023500523
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Neural networks are extensively employed in safety-critical systems, yet systems that incorporate them remain at risk because of adversarial examples. Although verification can strengthen the security of neural networks, neural network verification is an NP-hard problem, which makes applying verification algorithms to large-scale networks challenging. For this reason, we propose NNTBFV, a framework that applies the principles of Testing-Based Formal Verification (TBFV) to simplify neural networks and then verify the simplified networks. Unlike conventional neural network pruning techniques, this approach is specification-driven: its goal is to derive approximate execution paths under given preconditions. To mitigate the problem of unverifiable conditions caused by overly broad preconditions, we also propose a precondition partition method. Empirical evidence shows that as the range of a precondition narrows, the size of the corresponding execution path shrinks accordingly. Because the execution path produced by NNTBFV is itself a neural network, it can be checked by existing verification tools, and we provide a theoretical method for analyzing the results those tools return. We evaluate the effectiveness of NNTBFV on the ACAS Xu models, choosing verification-based and random-based neural network simplification algorithms as baselines. Experimental results show that NNTBFV closely approximates the baselines in simplification capability and surpasses the random-based method in efficiency.
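The abstract describes two ideas: deriving an approximate execution path of a network under a given precondition, and partitioning overly broad preconditions into narrower ones. The sketch below is only a rough illustration of those ideas, not the paper's TBFV-based algorithm. It assumes a fully-connected ReLU network stored as a list of (W, b) pairs and a box-shaped precondition, and it uses random sampling to drop hidden units that never activate on the sampled region; the function names (`extract_execution_path`, `partition_precondition`) and the sampling/bisection strategy are hypothetical choices made for this example.

```python
# Illustrative sketch only (not the authors' implementation).
# Assumes a fully-connected ReLU network given as [(W1, b1), ..., (Wk, bk)],
# where the last pair is the (affine) output layer.
import numpy as np

def forward_activations(layers, x):
    """Run the network on x and record which hidden ReLU units fire."""
    fired, h = [], x
    for W, b in layers[:-1]:                # hidden layers with ReLU
        z = W @ h + b
        fired.append(z > 0)
        h = np.maximum(z, 0.0)
    return fired

def extract_execution_path(layers, box_lo, box_hi, n_samples=1000, rng=None):
    """Sample inputs from the box precondition [box_lo, box_hi] and keep only
    hidden units that fire for at least one sample; units that never fire
    contribute 0 on the sampled region and are pruned, yielding a smaller
    network that approximates the original on that precondition."""
    rng = rng or np.random.default_rng(0)
    xs = rng.uniform(box_lo, box_hi, size=(n_samples, len(box_lo)))
    ever_fired = None
    for x in xs:
        fired = forward_activations(layers, x)
        ever_fired = fired if ever_fired is None else [
            e | f for e, f in zip(ever_fired, fired)]
    pruned = []
    keep_prev = np.ones(layers[0][0].shape[1], dtype=bool)   # keep all inputs
    for (W, b), keep in zip(layers[:-1], ever_fired):
        pruned.append((W[keep][:, keep_prev], b[keep]))       # drop dead units
        keep_prev = keep
    W_out, b_out = layers[-1]
    pruned.append((W_out[:, keep_prev], b_out))
    return pruned

def partition_precondition(box_lo, box_hi, depth=1):
    """Bisect the precondition box along its widest dimension `depth` times,
    producing narrower preconditions whose execution paths tend to be smaller."""
    boxes = [(np.asarray(box_lo, float), np.asarray(box_hi, float))]
    for _ in range(depth):
        new = []
        for lo, hi in boxes:
            d = int(np.argmax(hi - lo))
            mid = (lo[d] + hi[d]) / 2.0
            hi1, lo2 = hi.copy(), lo.copy()
            hi1[d], lo2[d] = mid, mid
            new += [(lo, hi1), (lo2, hi)]
        boxes = new
    return boxes

if __name__ == "__main__":
    # Tiny random 2-4-4-1 network, purely for demonstration.
    rng = np.random.default_rng(1)
    layers = [(rng.standard_normal((4, 2)), rng.standard_normal(4)),
              (rng.standard_normal((4, 4)), rng.standard_normal(4)),
              (rng.standard_normal((1, 4)), rng.standard_normal(1))]
    for lo, hi in partition_precondition([0.0, 0.0], [1.0, 1.0], depth=2):
        path = extract_execution_path(layers, lo, hi, rng=rng)
        print([W.shape[0] for W, _ in path[:-1]], "hidden units kept on", lo, hi)
```

The sketch mirrors the abstract's observation that narrower preconditions tend to yield smaller execution paths: bisecting the input box produces sub-regions on which fewer units ever fire, and each pruned sub-network is still an ordinary feed-forward network that a verification tool could take as input.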
Pages: 273-300
Number of Pages: 28