Neuron importance based verification of neural networks via divide and conquer

Cited by: 0
Authors
Dong, Yansong [1 ]
Liu, Yuehao [1 ]
Zhao, Liang [1 ]
Tian, Cong [1 ]
Duan, Zhenhua [1 ]
Affiliations
[1] Xidian Univ, Sch Comp Sci & Technol, 2 South Taibai Rd, Xian 710071, Shaanxi, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Neural networks; Model abstraction; Divide and conquer; Formal verification; Neuron importance;
DOI
10.1016/j.neucom.2023.126995
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Neural networks are widely used in applications such as image classification, speech recognition, and natural language processing. As intelligent systems often rely on neural networks to make critical decisions, it is crucial to ensure that a network's behavior is trustworthy. With the growing scale and complexity of neural networks, verifying their safety becomes challenging, since the large state space may render the verification of safety properties infeasible or unsolvable. To achieve efficient safety verification of neural networks, this paper proposes a divide-and-conquer verification method based on neuron importance analysis. The method converts the safety constraints to be verified into an optimization problem over the output neurons, and recursively measures the influence of neurons in the network on the final objective function to compute their importance. The neurons with the greatest importance are split to refine the constraints, so that the original verification problem is transformed into a set of sub-problems to be solved. The method is implemented as a tool named NIAVerify, which is evaluated by verifying properties of ReLU-based fully-connected benchmark networks trained on two classic datasets. Experimental results show that NIAVerify achieves a significant reduction of the state space and improves verification efficiency by 44.80% over an existing verification method.
Pages: 12
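
To make the divide-and-conquer scheme described in the abstract concrete, the following Python sketch shows a generic branch-and-bound loop that splits the "most important" unstable ReLU neuron into an active and an inactive sub-problem. It is a hypothetical illustration, not the authors' NIAVerify implementation: plain interval bound propagation stands in for the paper's optimization-based relaxation, and the pre-activation interval width stands in for the paper's recursive neuron-importance measure; all names (bound_output, verify, Phase, threshold) are assumptions introduced here.

from typing import Dict, List, Tuple
import numpy as np

# (layer index, neuron index) -> +1 (forced active) or -1 (forced inactive)
Phase = Dict[Tuple[int, int], int]


def bound_output(layers: List[Tuple[np.ndarray, np.ndarray]],
                 lo: np.ndarray, hi: np.ndarray,
                 fixed: Phase) -> Tuple[float, List[Tuple[Tuple[int, int], float]]]:
    """Propagate interval bounds through affine + ReLU layers under the fixed
    phases and return (upper bound on the scalar objective output, list of
    unfixed unstable neurons with a crude importance score)."""
    unstable = []
    for k, (W, b) in enumerate(layers):
        # interval arithmetic for the affine layer W x + b
        mid, rad = (lo + hi) / 2.0, (hi - lo) / 2.0
        mid, rad = W @ mid + b, np.abs(W) @ rad
        lo, hi = mid - rad, mid + rad
        if k == len(layers) - 1:
            break                          # last layer encodes the objective, no ReLU
        for j in range(len(lo)):
            phase = fixed.get((k, j), 0)
            if phase == -1:                # forced inactive: output is exactly 0
                lo[j] = hi[j] = 0.0
            else:
                if phase == 0 and lo[j] < 0.0 < hi[j]:
                    # stand-in importance score: width of the pre-activation interval
                    unstable.append(((k, j), hi[j] - lo[j]))
                lo[j], hi[j] = max(lo[j], 0.0), max(hi[j], 0.0)
    return float(hi[0]), unstable


def verify(layers, lo, hi, threshold: float) -> bool:
    """True iff every branch certifies objective <= threshold (may be incomplete)."""
    worklist: List[Phase] = [{}]           # start with no ReLU phase fixed
    while worklist:
        fixed = worklist.pop()
        ub, unstable = bound_output(layers, lo, hi, fixed)
        if ub <= threshold:
            continue                       # this branch is certified safe
        if not unstable:
            return False                   # no neuron left to split on this branch
        neuron, _ = max(unstable, key=lambda t: t[1])   # highest-importance neuron
        for phase in (+1, -1):             # divide: active / inactive sub-problems
            worklist.append({**fixed, neuron: phase})
    return True

The design choice mirrored here is that the branching order is driven by a per-neuron importance score rather than a fixed layer order, so splits that most tighten the objective bound are explored first; a full tool would replace the interval bounds with a tighter relaxation or an exact solver on the leaf sub-problems.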