Enabling Secure NVM-Based In-Memory Neural Network Computing by Sparse Fast Gradient Encryption

Cited by: 17
Authors
Cai, Yi [1 ]
Chen, Xiaoming [2 ]
Tian, Lu [3 ]
Wang, Yu [1 ]
Yang, Huazhong [1 ]
Affiliations
[1] Tsinghua Univ, Beijing Natl Res Ctr Informat Sci & Technol BNRis, Dept Elect Engn, Beijing 100084, Peoples R China
[2] Chinese Acad Sci, Inst Comp Technol, State Key Lab Comp Architecture, Beijing 100864, Peoples R China
[3] Xilinx Inc, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Artificial neural networks; Nonvolatile memory; Encryption; Computational modeling; Hardware; Non-volatile memory (NVM); compute-in-memory (CIM); neural network; security; encryption; ATTACKS;
DOI
10.1109/TC.2020.3017870
Chinese Library Classification
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
Neural network (NN) computing is energy-consuming on traditional computing systems, owing to the inherent memory-wall bottleneck of the von Neumann architecture and the approaching end of Moore's Law. Non-volatile memories (NVMs) have been demonstrated as promising alternatives for constructing computing-in-memory (CIM) systems to accelerate NN computing. However, NVM-based NN computing systems are vulnerable to confidentiality attacks because the weight parameters persist in memory after the system is powered off, enabling an adversary with physical access to extract the well-trained NN models. The goal of this article is to find a solution for thwarting confidentiality attacks. We define and model the weight encryption problem. We then propose an effective framework, comprising a sparse fast gradient encryption (SFGE) method and a runtime encryption scheduling (RES) scheme, to guarantee the confidentiality of NN models with negligible performance overhead. Moreover, we improve the SFGE method by incrementally generating the encryption keys. Additionally, we provide variants of the encryption method to better fit quantized models and various mapping strategies. The experiments demonstrate that by encrypting only an extremely small proportion of the weights (e.g., 20 weights per layer in ResNet-101), the NN models can be strictly protected.
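The abstract's core idea can be sketched as follows: select a small number of weights whose gradients most strongly influence the loss, and "encrypt" them with a fast-gradient-style perturbation that wrecks accuracy unless the key (indices plus deltas) is known. This NumPy sketch is illustrative only; the function names, key format, and selection rule are assumptions, not the authors' implementation.

```python
import numpy as np

def sfge_encrypt(weights, grads, k, epsilon):
    """Illustrative sparse fast-gradient-style encryption (hypothetical):
    perturb the k weights with the largest gradient magnitude in the
    loss-increasing (sign-of-gradient) direction, FGSM-style."""
    # Indices of the k weights with the largest |gradient|.
    flat_idx = np.argsort(np.abs(grads).ravel())[-k:]
    # The key records which weights were touched and by how much.
    key = {"idx": flat_idx,
           "delta": epsilon * np.sign(grads.ravel()[flat_idx])}
    enc = weights.copy().ravel()
    enc[flat_idx] += key["delta"]  # only k weights are modified
    return enc.reshape(weights.shape), key

def sfge_decrypt(enc_weights, key):
    """Recover the original weights by subtracting the keyed deltas."""
    dec = enc_weights.copy().ravel()
    dec[key["idx"]] -= key["delta"]
    return dec.reshape(enc_weights.shape)
```

Because only k entries per layer carry perturbations, the key is tiny compared with the model, which is what makes runtime (re-)encryption scheduling cheap enough to hide weights whenever the system powers down.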
Pages: 1596-1610
Page count: 15