Derailed: Arbitrarily Controlling DNN Outputs with Targeted Fault Injection Attacks

Cited by: 0
Authors
Ordonez, Jhon [1 ]
Yang, Chengmo [1 ]
Affiliations
[1] University of Delaware, Department of Electrical and Computer Engineering, Newark, DE 19716, USA
Source
2024 Design, Automation & Test in Europe Conference & Exhibition (DATE), 2024
Funding
U.S. National Science Foundation
Keywords
Fault injection; DNN accelerator; Clock glitching
DOI
10.23919/DATE58400.2024.10546554
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Hardware accelerators have been widely deployed to improve the efficiency of DNN execution in terms of performance, power, and time predictability. Yet recent studies have shown that DNN accelerators are vulnerable to fault injection attacks, which compromise their integrity and reliability. Classic fault injection attacks can cause a large overall accuracy drop, but they are difficult to control, since the injected faults affect computation across random classes. In contrast, this paper presents a controlled fault injection attack capable of derailing arbitrary inputs to a targeted range of classes. Our observation is that the fully connected (FC) layers greatly impact inference results, and that computation in the FC layer is typically performed in a fixed order. Leveraging this fact, an adversary can mount a controlled fault injection attack even against a black-box DNN model. Specifically, the attack adopts a two-step search process that first identifies the time window during which the FC layer is computed and then pinpoints the glitch offsets that derail inference to the targeted classes. The attack is implemented with clock glitching, and the target DNN accelerator is a DPU implemented on an FPGA. The attack is tested on three popular DNN models, namely ResNet50, InceptionV1, and MobileNetV2. Results show that up to 93% of inputs are derailed to the attacker-specified classes, demonstrating the attack's effectiveness.
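The two-step search described in the abstract lends itself to a short illustration. Below is a minimal Python sketch of the idea under stated assumptions: every name (FC_WINDOW, TARGET_CLASSES, run_with_glitch, find_fc_window, find_target_offsets) is a hypothetical stand-in, not the authors' tooling, and the "glitch" is simulated in pure Python rather than injected by an actual clock-glitching setup on an FPGA DPU.

    # Minimal sketch of a two-step glitch-offset search, simulated in pure
    # Python. All names are hypothetical illustrations: in the real attack,
    # run_with_glitch() would be one inference on the DPU with a clock
    # glitch fired at the chosen cycle offset.

    FC_WINDOW = range(900, 1000)   # assumed cycles during which the FC layer runs
    TARGET_CLASSES = {7, 8}        # attacker-chosen destination classes
    NUM_CLASSES = 10

    def run_with_glitch(x, cycle):
        """Toy stand-in for one glitched inference; returns the predicted class.

        A glitch inside the FC window corrupts the in-order output
        computation, steering the argmax toward the class being computed
        around `cycle`; a glitch elsewhere leaves the prediction unchanged.
        """
        if cycle in FC_WINDOW:
            # FC outputs are computed in order, so the glitch offset within
            # the window maps (roughly) onto an output-class index.
            return (cycle - FC_WINDOW.start) * NUM_CLASSES // len(FC_WINDOW)
        return sum(map(ord, x)) % NUM_CLASSES   # deterministic "clean" class

    def find_fc_window(probe, lo=0, hi=2000, step=25):
        """Step 1: scan glitch offsets; keep those that change the output."""
        clean = run_with_glitch(probe, -1)
        return [c for c in range(lo, hi, step)
                if run_with_glitch(probe, c) != clean]

    def find_target_offsets(probe, window):
        """Step 2: within the window, keep offsets hitting target classes."""
        return [c for c in window
                if run_with_glitch(probe, c) in TARGET_CLASSES]

    if __name__ == "__main__":
        window = find_fc_window("probe image")
        print("offsets inside the FC-layer window:", window)
        print("offsets derailing to target classes:",
              find_target_offsets("probe image", window))

Because each probe is just one inference plus one glitch at a chosen cycle, this kind of search needs no knowledge of the model's weights, which is what makes the black-box setting described in the abstract plausible.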
Pages: 6
Related Papers
22 items in total (entries [11]-[20] shown)
  • [11] Liu, Wenye; Chang, Chip-Hong; Zhang, Fan; Lou, Xiaoxuan. Imperceptible Misclassification Attack on Deep Learning Accelerator by Glitch Injection. Proceedings of the 2020 57th ACM/EDAC/IEEE Design Automation Conference (DAC), 2020.
  • [12] Lou, X., 2019, arXiv.
  • [13] Luo, Yukui; Gongye, Cheng; Fei, Yunsi; Xu, Xiaolin. DeepStrike: Remotely-Guided Fault Injection Attacks on DNN Accelerator in Cloud-FPGA. 2021 58th ACM/IEEE Design Automation Conference (DAC), 2021: 295-300.
  • [14] Mahmoud, Dina G.; Lenders, Vincent; Stojilovic, Mirjana. Electrical-Level Attacks on CPUs, FPGAs, and GPUs: Survey and Implications in the Heterogeneous Era. ACM Computing Surveys, 2023, 55(3).
  • [15] Meng, Fanruo; Yang, Chengmo. Exploring Image Selection for Self-Testing in Neural Network Accelerators. 2022 IEEE Computer Society Annual Symposium on VLSI (ISVLSI), 2022: 345-350.
  • [16] Mittal, Sparsh; Gupta, Himanshi; Srivastava, Srishti. A survey on hardware security of DNN models and accelerators. Journal of Systems Architecture, 2021, 117.
  • [17] Rakin, A. S., 2022, IEEE Transactions on Pattern Analysis and Machine Intelligence.
  • [18] Sandler, Mark; Howard, Andrew; Zhu, Menglong; Zhmoginov, Andrey; Chen, Liang-Chieh. MobileNetV2: Inverted Residuals and Linear Bottlenecks. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018: 4510-4520.
  • [19] Sun, R., 2021, arXiv.
  • [20] Szegedy, C., 2015, Proc. CVPR IEEE, p. 1. DOI: 10.1109/CVPR.2015.7298594.