Safeguarding the Intelligence of Neural Networks with Built-in Light-weight Integrity MArks (LIMA)

Cited by: 4
Authors
Hosseini, Fateme S. [1 ]
Liu, Qi [2 ]
Meng, Fanruo [1 ]
Yang, Chengmo [1 ]
Wen, Wujie [2 ]
Affiliations
[1] Univ Delaware, Newark, DE 19716 USA
[2] Lehigh Univ, Bethlehem, PA 18015 USA
Source
2021 IEEE INTERNATIONAL SYMPOSIUM ON HARDWARE ORIENTED SECURITY AND TRUST (HOST) | 2021
Funding
National Science Foundation (USA)
DOI
10.1109/HOST49136.2021.9702292
CLC Classification
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
As Deep Neural Networks (DNNs) are widely adopted in many real-world applications, their integrity becomes critical. Unfortunately, DNN models are not resilient to fault injection attacks. In particular, recent work has shown that a Bit-Flip Attack (BFA) can completely destroy the intelligence of a DNN with only a few carefully injected bit-flips. To defend against this threat, we propose the Light-weight Integrity MArks (LIMA) framework, which protects the integrity of the most significant bits (MSBs) of DNN weights - the main target of BFA. Such protection is enabled by embedding a specific property into a trained DNN model's weights before deploying it in hardware. LIMA outperforms existing BFA countermeasures: it requires no retraining, imposes no storage overhead, offers full coverage of all DNN layers, and can be easily verified with Multiply-Accumulate (MAC) operations to detect BFA. Our comprehensive study demonstrates 100% effectiveness in detecting chains of bit-flips and near-zero accuracy loss from embedding LIMA. The results also show that even when the attacker has complete knowledge of the proposed defense, attacking DNNs with built-in LIMA is extremely difficult, if not completely impossible.
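The record above does not spell out LIMA's actual marking scheme, but the embed-then-verify flow the abstract describes can be illustrated with a toy example. The Python/NumPy sketch below embeds a hypothetical even-parity mark over the MSBs of each group of INT8 weights and checks it with a MAC-style accumulation; the function names, the group size of 64, and the parity scheme are all illustrative assumptions, not the authors' construction.

import numpy as np

def extract_msbs(weights_q):
    # Bit 7 of each INT8 weight (the sign bit) - the main target of BFA.
    return (weights_q.view(np.uint8) >> 7) & 1

def embed_parity_mark(weights_q, group_size=64):
    # Toy embedding (NOT the LIMA scheme): force even MSB parity in each
    # group by nudging the one weight cheapest to move across the sign
    # boundary (0 <-> -1), minimizing the perturbation to the model.
    w = weights_q.copy()
    for start in range(0, w.size, group_size):
        group = w[start:start + group_size]  # view into w
        if int(extract_msbs(group).sum()) % 2 == 1:
            g16 = group.astype(np.int16)  # avoid INT8 overflow in the cost
            cost = np.where(g16 >= 0, g16 + 1, -g16)  # distance to opposite MSB
            idx = int(np.argmin(cost))
            group[idx] = -1 if group[idx] >= 0 else 0
    return w

def verify_mark(weights_q, group_size=64):
    # MAC-style check: accumulate the MSBs of each group (a dot product
    # with an all-ones vector) and confirm even parity everywhere.
    bits = extract_msbs(weights_q)
    pad = (-bits.size) % group_size  # zero-padding preserves parity
    groups = np.pad(bits, (0, pad)).reshape(-1, group_size)
    return bool(np.all(groups.sum(axis=1) % 2 == 0))

# Usage: a single MSB flip in any group is caught.
rng = np.random.default_rng(0)
w = rng.integers(-128, 128, size=1024, dtype=np.int8)
w_marked = embed_parity_mark(w)
assert verify_mark(w_marked)
w_attacked = w_marked.copy()
w_attacked[5] ^= np.int8(-128)  # bit-flip on one sign bit
assert not verify_mark(w_attacked)

Nudging a weight across the 0/-1 sign boundary changes its value by at most one quantization step, which is consistent with the near-zero accuracy loss the abstract reports. A single parity bit per group would, however, miss an even number of flips landing in the same group; since the paper claims 100% detection of bit-flip chains, the property LIMA embeds is necessarily stronger than this toy mark.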
Pages: 1-12 (12 pages)