Survey on Robustness Verification of Feedforward Neural Networks and Recurrent Neural Networks

Cited by: 0
Authors
Liu Y. [1 ,2 ]
Yang P.-F. [1 ,4 ]
Zhang L.-J. [1 ,2 ]
Wu Z.-L. [1 ,2 ]
Feng Y. [3 ]
Affiliations
[1] State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing
[2] University of Chinese Academy of Sciences, Beijing
[3] University of Technology Sydney, Sydney
[4] Pazhou Lab, Guangzhou
Source
Ruan Jian Xue Bao/Journal of Software | 2023, Vol. 34, No. 7
Keywords
artificial intelligence security; formal method; intelligent system; neural network; robustness
DOI
10.13328/j.cnki.jos.006863
Abstract
With the advent of the intelligent age, intelligent systems equipped with deep neural networks (DNNs) have permeated every aspect of daily life. However, owing to their black-box nature and large scale, the predictions of neural networks are difficult to fully trust. When neural networks are applied to safety-critical fields such as autonomous driving, guaranteeing their safety remains a great challenge for both academia and industry. To this end, academia has conducted extensive research on robustness, a special kind of neural network safety, and has proposed many algorithms for robustness analysis and verification. Verification algorithms for feedforward neural networks (FNNs), comprising both precise and approximate algorithms, have developed relatively rapidly, whereas verification algorithms for other types of networks, such as recurrent neural networks (RNNs), are still at a preliminary stage. This study reviews the current development of DNNs and the challenges of deploying them in daily life. It exhaustively surveys the robustness verification algorithms for FNNs and RNNs, and analyzes and compares the intrinsic connections among these algorithms. Safety verification algorithms for RNNs in specific application scenarios are also investigated, and future research directions in the field of neural network robustness verification are identified. © 2023 Chinese Academy of Sciences. All rights reserved.
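To make the "approximate algorithm" family mentioned in the abstract concrete, the following is a minimal illustrative sketch of interval bound propagation, one of the simplest incomplete verification techniques for FNNs. The toy network, its weights, and the perturbation radius are all hypothetical and are not taken from the surveyed paper; real verifiers such as Marabou or MIPVerify compute far tighter (or exact) bounds.

```python
import numpy as np

def interval_bound_propagation(weights, biases, x, eps):
    """Soundly over-approximate a ReLU network's outputs on the
    L-infinity ball [x - eps, x + eps].

    Returns element-wise lower/upper bounds on the output logits.
    Bounds are guaranteed to contain every reachable output, but may
    be loose (hence "incomplete" verification)."""
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(zip(weights, biases)):
        # Split W by sign: positive entries map lower->lower, upper->upper;
        # negative entries swap the roles of the two bounds.
        Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
        new_lo = Wp @ lo + Wn @ hi + b
        new_hi = Wp @ hi + Wn @ lo + b
        if i < len(weights) - 1:  # ReLU on hidden layers only
            new_lo, new_hi = np.maximum(new_lo, 0), np.maximum(new_hi, 0)
        lo, hi = new_lo, new_hi
    return lo, hi

# Hypothetical 2-2-2 network: check that class 0 remains the top logit
# for every input in the perturbation ball around x = [2, 0].
W1 = np.array([[1.0, -1.0], [0.5, 0.5]]); b1 = np.array([0.0, 0.0])
W2 = np.array([[1.0, 0.0], [0.0, 1.0]]);  b2 = np.array([0.0, 0.0])
lo, hi = interval_bound_propagation([W1, W2], [b1, b2],
                                    np.array([2.0, 0.0]), eps=0.1)
# Robust if class 0's lower bound exceeds class 1's upper bound.
robust = lo[0] > hi[1]
```

If `robust` is true the property is certified; if false, the result is inconclusive (the bounds may simply be too loose), which is exactly the trade-off between approximate and precise verification algorithms discussed in the survey.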
Pages: 1-33 (32 pages)