Exploring Model Stability of Deep Neural Networks for Reliable RRAM-Based In-Memory Acceleration

Times Cited: 5
Authors
Krishnan, Gokul [1 ]
Yang, Li [1 ]
Sun, Jingbo [1 ]
Hazra, Jubin [2 ]
Du, Xiaocong [1 ]
Liehr, Maximilian [2 ]
Li, Zheng [1 ]
Beckmann, Karsten [2 ]
Joshi, Rajiv V. [3 ]
Cady, Nathaniel C. [2 ]
Fan, Deliang [1 ]
Cao, Yu [1 ]
Affiliations
[1] Arizona State Univ, Sch Elect Comp & Energy Engn, Tempe, AZ 85287 USA
[2] State Univ New York Polytech, Albany, NY 12246 USA
[3] IBM Corp, TJ Watson Res Ctr, Yorktown Hts, NY 10598 USA
Keywords
Stability analysis; Computational modeling; Quantization (signal); Semiconductor device modeling; Training; Perturbation methods; Neural networks; In-memory computing; RRAM; model stability; deep neural networks; reliability; pruning; quantization;
DOI
10.1109/TC.2022.3174585
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology]
Subject Classification Code
0812
Abstract
RRAM-based in-memory computing (IMC) effectively accelerates deep neural networks (DNNs), and model compression techniques such as quantization and pruning are necessary to improve algorithm mapping and hardware performance. However, in the presence of RRAM device variations, low-precision and sparse DNNs suffer from severe post-mapping accuracy loss. To address this, we investigate a new metric, model stability, derived from the loss landscape, to shed light on accuracy loss under device variations and model compression; this metric guides an algorithmic solution that maximizes model stability and mitigates accuracy loss. Based on statistical data from a CMOS/RRAM 1T1R test chip at 65 nm, we characterize wafer-level RRAM variations and develop a cross-layer benchmark tool that incorporates quantization, pruning, device variations, model stability, and IMC architecture parameters to assess post-mapping accuracy and hardware performance. Leveraging this tool, we show that loss-landscape-based DNN model selection for stability effectively tolerates device variations and achieves post-mapping accuracy higher than that obtained with 50% lower RRAM variations. Moreover, we quantitatively explain why model pruning increases sensitivity to variations, whereas a lower-precision model tolerates variations better. Finally, we propose a novel variation-aware training method to improve model stability, under which the most stable model yields the best post-mapping accuracy for compressed DNNs. Experimental evaluation of the method shows up to 19%, 21%, and 11% post-mapping accuracy improvement for our 65 nm RRAM device, across various precision and sparsity levels, on the CIFAR-10, CIFAR-100, and SVHN datasets, respectively.
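The abstract does not spell out the stability metric or the noise model, but the kind of evaluation it describes can be sketched. Below is a minimal PyTorch sketch under assumed parameters: multiplicative log-normal noise stands in for RRAM conductance variation, and the average accuracy drop over repeated noise injections serves as a rough loss-landscape flatness (stability) proxy. The function names, the sigma value, and the noise model are illustrative assumptions, not the authors' implementation.

import copy
import torch

def inject_rram_variation(model, sigma=0.1):
    # Copy the model and perturb every weight with multiplicative
    # log-normal noise, a common first-order stand-in for RRAM
    # conductance variation (sigma is an assumed value).
    noisy = copy.deepcopy(model)
    with torch.no_grad():
        for p in noisy.parameters():
            p.mul_(torch.exp(sigma * torch.randn_like(p)))
    return noisy

@torch.no_grad()
def post_mapping_accuracy(model, loader, device="cpu"):
    # Top-1 accuracy of a (perturbed) model, i.e., its accuracy
    # after the weights are "mapped" onto noisy devices.
    model.eval().to(device)
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total

def stability_proxy(model, loader, sigma=0.1, trials=5, device="cpu"):
    # Average accuracy drop across repeated variation injections;
    # a flatter loss landscape (a more stable model) shows a smaller drop.
    base = post_mapping_accuracy(model, loader, device)
    drops = []
    for _ in range(trials):
        noisy = inject_rram_variation(model, sigma)
        drops.append(base - post_mapping_accuracy(noisy, loader, device))
    return sum(drops) / len(drops)

In this sketch, a compressed (quantized or pruned) model whose stability_proxy remains small at the measured device sigma would be preferred for mapping; the paper's variation-aware training and cross-layer benchmark tool go well beyond this simplification.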
Pages: 2740-2752
Page Count: 13