Abstract Layer for LeakyReLU for Neural Network Verification Based on Abstract Interpretation

Cited by: 12
Authors
El Mellouki, Omar [1 ]
Khedher, Mohamed Ibn [2 ]
El-Yacoubi, Mounim A. [3 ]
Affiliations
[1] ENSTA IP Paris, Palaiseau, France
[2] IRT SystemX, Palaiseau, France
[3] Inst Polytech Paris, Samovar, CNRS, Telecom SudParis, Palaiseau, France
Keywords
Neural networks; Robustness; Perturbation methods; Transformers; Task analysis; Optimization; Deep learning; Neural network verification; robustness; abstract interpretation; abstract transformer; LeakyReLU;
DOI
10.1109/ACCESS.2023.3263145
Chinese Library Classification
TP [Automation technology, computer technology]
Subject Classification Code
0812
Abstract
Deep neural networks have been widely used in several complex tasks such as robotics, self-driving cars, and medicine. However, they have recently been shown to be vulnerable in uncertain environments where inputs are noisy. As a consequence, robustness has become an essential property for the application of neural networks in critical systems. Robustness is the capacity to make the same decision even when inputs are disturbed by different types of perturbations, including adversarial attacks. The great difficulty today lies in providing a formal guarantee of robustness, which is the context of this paper. To this end, abstract interpretation, a popular state-of-the-art method that converts the layers of the neural network into abstract layers, has recently been proposed. An abstract layer acts on a geometric abstract object, or shape, that implicitly comprises an infinite number of inputs rather than a single input. In this paper, we propose a new mathematical formulation of an abstract transformer that converts a LeakyReLU activation layer into an abstract layer. Moreover, we implement our transformer and integrate it into the ERAN tool. For validation, we assess the performance of our transformer as a function of the LeakyReLU hyperparameter, and we study the robustness of the neural network as a function of the input perturbation intensity. Our approach is evaluated on three datasets: MNIST, Fashion, and a robotic dataset. The results demonstrate the efficacy of our abstract transformer, both in its mathematical formulation and in its implementation.
Pages: 33401-33413
Number of pages: 13
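
To make the idea of an abstract transformer concrete, here is a minimal Python sketch that propagates the simplest abstract shape, an interval (box), through a LeakyReLU activation. This is not the paper's transformer: the formulation integrated into ERAN targets richer relational abstract domains, and the function name leaky_relu_interval and its parameters are introduced here only for illustration.

# Minimal sketch: interval (box) abstract transformer for LeakyReLU.
# Assumption: only the box domain is shown; the paper's transformer in ERAN
# is formulated for richer abstract shapes.
def leaky_relu_interval(lower, upper, alpha=0.01):
    """Propagate the interval [lower, upper] through
    LeakyReLU(x) = x if x >= 0 else alpha * x."""
    assert lower <= upper and 0.0 < alpha < 1.0
    # LeakyReLU is monotonically increasing for 0 < alpha < 1, so the image
    # of an interval is the interval spanned by the images of its endpoints.
    def f(x):
        return x if x >= 0.0 else alpha * x
    return f(lower), f(upper)

# A pre-activation guaranteed to lie in [-2.0, 3.0] yields an output
# guaranteed to lie in [-0.02, 3.0] when alpha = 0.01.
print(leaky_relu_interval(-2.0, 3.0, alpha=0.01))

Per neuron this box transformer is exact because LeakyReLU is monotone; the benefit of the relational domains available in ERAN, which the paper's transformer addresses, is that they also track dependencies between neurons and can therefore certify tighter output bounds for the whole network.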
相关论文
共 33 条
[1]  
Bunel R, 2018, ADV NEUR IN, V31
[2]  
Cousot P, 2008, LECT NOTES COMPUT SC, V4171, P189
[3]   Boosting Adversarial Attacks with Momentum [J].
Dong, Yinpeng ;
Liao, Fangzhou ;
Pang, Tianyu ;
Su, Hang ;
Zhu, Jun ;
Hu, Xiaolin ;
Li, Jianguo .
2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, :9185-9193
[4]  
Dvijotham K, 2018, UNCERTAINTY IN ARTIFICIAL INTELLIGENCE, P550
[5]   Formal Verification of Piece-Wise Linear Feed-Forward Neural Networks [J].
Ehlers, Ruediger .
AUTOMATED TECHNOLOGY FOR VERIFICATION AND ANALYSIS (ATVA 2017), 2017, 10482 :269-286
[6]  
Gehr T., ETH LIB NUMERICAL AN
[7]  
Gehr T., ETH SRIERAN ERAN
[8]   AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation [J].
Gehr, Timon ;
Mirman, Matthew ;
Drachsler-Cohen, Dana ;
Tsankov, Petar ;
Chaudhuri, Swarat ;
Vechev, Martin .
2018 IEEE SYMPOSIUM ON SECURITY AND PRIVACY (SP), 2018, :3-18
[9]   Dynamic and Scalable Deep Neural Network Verification Algorithm [J].
Ibn Khedher, Mohamed ;
Ibn-Khedher, Hatem ;
Hadji, Makhlouf .
ICAART: PROCEEDINGS OF THE 13TH INTERNATIONAL CONFERENCE ON AGENTS AND ARTIFICIAL INTELLIGENCE - VOL 2, 2021, :1122-1130
[10]   Analyzing Adversarial Attacks against Deep Learning for Robot Navigation [J].
Ibn Khedher, Mohamed ;
Rezzoug, Mehdi .
ICAART: PROCEEDINGS OF THE 13TH INTERNATIONAL CONFERENCE ON AGENTS AND ARTIFICIAL INTELLIGENCE - VOL 2, 2021, :1114-1121