A Closer Look at Evaluating the Bit-Flip Attack Against Deep Neural Networks

Cited: 2
Authors
Hector, Kevin [1 ,2 ]
Moellic, Pierre-Alain [1 ,2 ]
Dumont, Mathieu [1 ,2 ]
Dutertre, Jean-Max [3 ]
Affiliations
[1] Mines St Etienne, CEA Tech, Ctr CMP, Equipe Commune CEA Tech, F-13541 Gardanne, France
[2] Univ Grenoble Alpes, CEA, Leti, F-38000 Grenoble, France
[3] Mines St Etienne, CEA, Leti, Ctr CMP, F-13541 Gardanne, France
Source
2022 IEEE 28TH INTERNATIONAL SYMPOSIUM ON ON-LINE TESTING AND ROBUST SYSTEM DESIGN (IOLTS 2022) | 2022
Keywords
Deep learning; Security; Fault Injection; Adversarial Attack; Robustness Evaluation;
DOI
10.1109/IOLTS56730.2022.9897693
Chinese Library Classification
TP3 [Computing technology; computer technology];
Discipline code
0812 ;
Abstract
Deep neural network models are massively deployed on a wide variety of hardware platforms. This results in the appearance of new attack vectors that significantly extend the standard attack surface, which has been extensively studied by the adversarial machine learning community. One of the first attacks that aims at drastically dropping the performance of a model by targeting its parameters stored in memory is the Bit-Flip Attack (BFA). In this work, we point out several evaluation challenges related to the BFA. First, the absence of an adversary budget in the standard threat model is problematic, especially when dealing with physical attacks. Moreover, since the BFA exhibits critical variability, we discuss the influence of some training parameters and the importance of the model architecture. This work is the first to present the impact of the BFA against fully-connected architectures, which behave differently from convolutional neural networks. These results highlight the importance of defining robust and sound evaluation methodologies to properly evaluate the dangers of parameter-based attacks, as well as to measure the real level of robustness offered by a defense.
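The memory-level effect the BFA exploits can be illustrated with a minimal sketch (not the paper's implementation): in a two's-complement 8-bit quantized weight, flipping a single bit, in particular the most significant bit, changes the stored value drastically. The `flip_bit` helper below is a hypothetical name introduced for illustration only.

```python
def flip_bit(weight: int, bit: int) -> int:
    """Flip one bit of a signed 8-bit weight (two's-complement encoding)."""
    raw = weight & 0xFF                       # view the weight as an unsigned byte
    raw ^= 1 << bit                           # flip the chosen bit
    return raw - 256 if raw >= 128 else raw   # reinterpret as signed int8

w = 3                                         # a small positive int8 weight
print(flip_bit(w, 7))                         # flipping the MSB: 3 -> -125
```

A single such flip on a carefully chosen weight is the elementary operation the BFA iterates to collapse model accuracy.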
Pages: 5