Evaluation of Parameter-Based Attacks Against Embedded Neural Networks with Laser Injection

Cited by: 0
Authors
Dumont, Mathieu [1 ,2 ]
Hector, Kevin [1 ,2 ]
Moellic, Pierre-Alain [1 ,2 ]
Dutertre, Jean-Max [3 ]
Pontie, Simon [1 ,2 ]
Affiliations
[1] CEA Tech, Ctr CMP, Equipe Commune CEA Tech Mines St Etienne, F-13541 Gardanne, France
[2] Univ Grenoble Alpes, CEA, Leti, F-38000 Grenoble, France
[3] CEA, Ctr CMP, Mines St Etienne, Leti, F-13541 Gardanne, France
Source
COMPUTER SAFETY, RELIABILITY, AND SECURITY, SAFECOMP 2023 | 2023, Vol. 14181
Keywords
Hardware Security; Fault Injection; Evaluation and certification; Machine Learning; Neural Network; DNN;
DOI
10.1007/978-3-031-40923-3_19
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Upcoming certification actions related to the security of machine learning (ML) based systems raise major evaluation challenges, amplified by the large-scale deployment of models on many hardware platforms. Until recently, most research works focused on API-based attacks that consider an ML model as a pure algorithmic abstraction. However, new implementation-based threats have been revealed, emphasizing the urgency of proposing both practical and simulation-based methods to properly evaluate the robustness of models. A major concern is parameter-based attacks (such as the Bit-Flip Attack, BFA) that highlight the lack of robustness of typical deep neural network models when confronted with accurate and optimal alterations of their internal parameters stored in memory. In a security testing context, this work practically reports, for the first time, a successful variant of the BFA on a 32-bit Cortex-M microcontroller using laser fault injection, a standard fault injection means for security evaluation that enables spatially and temporally accurate faults. To avoid unrealistic brute-force strategies, we show how simulations help select the most sensitive set of bits among the parameters, taking the laser fault model into account.
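The abstract's core idea — simulating bit flips to rank the most sensitive parameter bits before attempting any physical fault injection — can be illustrated with a minimal sketch. This is a hypothetical toy example, not the paper's actual tooling: a single dot product with int8-quantized weights stands in for a model, and every single-bit flip is scored by how far it moves the output from the fault-free baseline.

```python
# Hypothetical sketch of simulation-guided bit selection (toy stand-in for a
# model): score every single-bit flip of int8 weights by output deviation.

def int8_dot(weights, inputs):
    """Dot product with int8 weights stored as raw bytes 0..255 (two's complement)."""
    def to_signed(b):
        return b - 256 if b >= 128 else b
    return sum(to_signed(w) * x for w, x in zip(weights, inputs))

def rank_bit_flips(weights, inputs):
    """Flip each bit of each weight once; rank flips by output deviation.

    Mimics the abstract's idea: instead of brute-forcing faults on hardware,
    simulate all candidate flips and target only the most damaging bits."""
    baseline = int8_dot(weights, inputs)
    impact = []
    for i, w in enumerate(weights):
        for bit in range(8):
            faulty = list(weights)
            faulty[i] = w ^ (1 << bit)  # inject one bit flip
            impact.append((abs(int8_dot(faulty, inputs) - baseline), i, bit))
    return sorted(impact, reverse=True)

weights = [3, 250, 17, 129]   # raw bytes: 250 = -6, 129 = -127 in two's complement
inputs = [1, 2, 1, 1]
top = rank_bit_flips(weights, inputs)[:3]
# The highest-impact flips land on high-order (sign/MSB) bits of the weights.
```

In a real evaluation the scoring would use model accuracy loss over a test set rather than a single output deviation, and the candidate set would be restricted to flips reachable under the laser fault model, but the ranking principle is the same.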
Pages: 258-271
Page count: 14