Robustness Verification Boosting for Deep Neural Networks

Cited by: 2
Authors
Feng, Chendong [1]
Affiliations
[1] Natl Univ Def Technol, Coll Comp, Changsha, Peoples R China
Source
2019 6TH INTERNATIONAL CONFERENCE ON INFORMATION SCIENCE AND CONTROL ENGINEERING (ICISCE 2019) | 2019
Keywords
DNN; Robustness; Verification; Adversarial Example; Boosting
DOI
10.1109/ICISCE48695.2019.00112
Chinese Library Classification
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
Deep Neural Networks (DNNs) are a widely used deep learning technique, and ensuring the safety of DNN-based systems is a critical and challenging problem. Robustness is an important safety property of DNNs, but existing approaches to verifying it are time-consuming and hard to scale. In this paper, we propose a boosting method for falsification in DNN robustness verification, which aims to find counter-examples earlier. Our observation is that different inputs to a DNN have different likelihoods of having counter-examples in their neighborhoods; in particular, an input with a small difference between the largest and second-largest output values tends to be an Achilles heel of the DNN. We have implemented our method and applied it to two state-of-the-art DNN verification tools and four DNN attack methods. Experiments on two benchmarks demonstrate the effectiveness of our boosting method.
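The prioritization idea the abstract describes can be sketched as ranking inputs by the gap between the two largest output values, so that near-tie inputs (the likely Achilles heels) are attacked or verified first. The sketch below is an illustration of that heuristic only, not the paper's implementation; the function name `margin_ranking` is ours.

```python
import numpy as np

def margin_ranking(outputs):
    """Rank inputs by the gap between their top-2 DNN output values.

    outputs: array of shape (n_inputs, n_classes) holding the DNN's
    output values for each input. Returns input indices sorted so that
    inputs with the smallest top-1/top-2 margin come first -- these are
    the inputs most likely to have counter-examples nearby.
    """
    srt = np.sort(outputs, axis=1)        # ascending per row
    margins = srt[:, -1] - srt[:, -2]     # largest minus second-largest
    return np.argsort(margins)            # smallest margin first

# Hypothetical outputs: the second input is a near-tie, so it is tried first.
outputs = np.array([
    [0.90, 0.05, 0.05],   # confident prediction -> large margin
    [0.40, 0.38, 0.22],   # near-tie            -> small margin
    [0.70, 0.20, 0.10],
])
order = margin_ranking(outputs)           # order[0] == 1
```

A falsification loop would then feed inputs to the verifier or attack tool in this order, hoping to hit a counter-example early and avoid expensive full verification of robust inputs.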
Pages: 531-535 (5 pages)