Survey on Robustness Verification of Feedforward Neural Networks and Recurrent Neural Networks

Cited by: 0
Authors
Liu Y. [1,2]
Yang P.-F. [1,4]
Zhang L.-J. [1,2]
Wu Z.-L. [1,2]
Feng Y. [3]
Affiliations
[1] State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing
[2] University of Chinese Academy of Sciences, Beijing
[3] University of Technology Sydney, Sydney
[4] Pazhou Lab, Guangzhou
Source
Ruan Jian Xue Bao/Journal of Software | 2023, Vol. 34, No. 7
Keywords
artificial intelligence security; formal method; intelligent system; neural network; robustness
DOI
10.13328/j.cnki.jos.006863
Abstract
With the advent of the intelligent age, intelligent systems equipped with deep neural networks (DNNs) have penetrated every aspect of daily life. However, owing to their black-box and large-scale characteristics, the predictions of neural networks are hard to fully trust. When neural networks are applied to safety-critical fields such as autonomous driving, how to guarantee their safety remains a great challenge for academia and industry. For this reason, academia has conducted extensive research on robustness, a special kind of security property of neural networks, and has proposed many algorithms for robustness analysis and verification. The verification algorithms for feedforward neural networks (FNNs), which include precise algorithms and approximate algorithms, have developed relatively maturely, whereas verification algorithms for other types of networks, such as recurrent neural networks (RNNs), are still at an early stage. This study reviews the current development of DNNs and the challenges of deploying them in daily life, exhaustively surveys the robustness verification algorithms for FNNs and RNNs, and analyzes and compares the intrinsic connections among these algorithms. It also investigates the security verification algorithms of RNNs in specific application scenarios and outlines future research directions in the field of neural network robustness verification. © 2023 Chinese Academy of Sciences. All rights reserved.
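For reference, the local robustness property targeted by most of the verification algorithms surveyed above can be stated as follows (a standard formalization, assumed here rather than quoted from the paper): a classifier $f$ is locally robust at an input $x_0$ with radius $\epsilon$ under the $\ell_\infty$ norm if

$$\forall x:\ \|x - x_0\|_\infty \le \epsilon \ \Rightarrow\ \operatorname{argmax}_i f(x)_i = \operatorname{argmax}_i f(x_0)_i,$$

that is, every input in the $\epsilon$-ball around $x_0$ receives the same label as $x_0$. Precise algorithms decide this property exactly, for example by encoding the network as constraints for an SMT or mixed integer linear programming solver, while approximate algorithms soundly over-approximate the set of reachable outputs and may therefore answer "unknown".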
Pages: 1-33
Number of pages: 32