DeepHammer: Depleting the Intelligence of Deep Neural Networks through Targeted Chain of Bit Flips

Cited by: 0
Authors
Yao, Fan [1 ]
Rakin, Adnan Siraj [2 ]
Fan, Deliang [2 ]
Affiliations
[1] Univ Cent Florida, Orlando, FL 32816 USA
[2] Arizona State Univ, Tempe, AZ 85287 USA
Funding
National Science Foundation (USA);
Keywords
HARDWARE;
DOI
Not available
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Security of machine learning is increasingly becoming a major concern due to the ubiquitous deployment of deep learning in many security-sensitive domains. Many prior studies have shown external attacks such as adversarial examples that tamper with the integrity of DNNs using maliciously crafted inputs. However, the security implications of internal threats (i.e., hardware vulnerabilities) to DNN models have not yet been well understood. In this paper, we demonstrate the first hardware-based attack on quantized deep neural networks, DeepHammer, which deterministically induces bit flips in model weights to compromise DNN inference by exploiting the rowhammer vulnerability. DeepHammer performs an aggressive bit search in the DNN model to identify the most vulnerable weight bits that are flippable under system constraints. To trigger deterministic bit flips across multiple pages within a reasonable amount of time, we develop novel system-level techniques that enable fast deployment of victim pages, memory-efficient rowhammering, and precise flipping of targeted bits. DeepHammer can deliberately degrade the inference accuracy of the victim DNN system to a level that is only as good as random guessing, thus completely depleting the intelligence of the targeted DNN systems. We systematically demonstrate our attacks on real systems against 11 DNN architectures with 4 datasets corresponding to different application domains. Our evaluation shows that DeepHammer is able to successfully tamper with DNN inference behavior at run-time within a few minutes. We further discuss several mitigation techniques at both the algorithm and system levels to protect DNNs against such attacks. Our work highlights the need to incorporate security mechanisms in future machine learning systems to enhance the robustness of DNNs against hardware-based deterministic fault injections.
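The vulnerable-bit search described in the abstract can be pictured with a small, purely illustrative sketch: rank candidate weight bits of an int8-quantized layer by gradient magnitude, trial-flip each candidate bit, and keep the single flip that increases the loss the most on a calibration batch. The function name, the candidate count, and the int8-values-stored-as-float representation below are assumptions for illustration; this is not the authors' implementation, and it omits the system-level rowhammer machinery entirely.

    # Illustrative sketch only: gradient-guided search for the single most
    # damaging weight-bit flip in one int8-quantized layer.
    import torch
    import torch.nn.functional as F

    def most_damaging_bit_flip(model, layer, x, y, n_candidates=64):
        """Return ((flat_weight_index, bit_position), loss) of the most damaging
        trial flip. Assumes layer.weight holds signed 8-bit integer values in a
        float tensor (a common way to simulate quantized inference)."""
        w = layer.weight
        # Gradient magnitude w.r.t. the weights marks the most sensitive weights.
        loss = F.cross_entropy(model(x), y)
        (grad,) = torch.autograd.grad(loss, w)
        candidates = grad.abs().flatten().topk(n_candidates).indices.tolist()

        best, best_loss = None, loss.item()
        flat = w.data.view(-1)                    # view into the live weights
        with torch.no_grad():
            for i in candidates:
                orig = int(flat[i].item())
                for bit in range(8):              # try each bit of the int8 value
                    flipped = (orig & 0xFF) ^ (1 << bit)
                    if flipped >= 128:            # reinterpret as signed 8-bit
                        flipped -= 256
                    flat[i] = float(flipped)
                    trial_loss = F.cross_entropy(model(x), y).item()
                    if trial_loss > best_loss:
                        best, best_loss = (i, bit), trial_loss
                flat[i] = float(orig)             # restore before the next candidate
        return best, best_loss

In the actual attack, the search is further constrained to bits that the rowhammer profiling step has found to be physically flippable in the victim's DRAM, per the system constraints noted in the abstract.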
Pages: 1463-1480
Number of pages: 18