An On-chip Layer-wise Training Method for RRAM based Computing-in-memory Chips

Cited by: 8
Authors
Geng, Yiwen [1 ]
Gao, Bin [1 ]
Zhang, Qingtian [1 ]
Zhang, Wenqiang [1 ]
Yao, Peng [1 ]
Xi, Yue [1 ]
Lin, Yudeng [1 ]
Chen, Junren [1 ]
Tang, Jianshi [1 ]
Wu, Huaqiang [1 ]
Qian, He [1 ]
Affiliation
[1] Tsinghua Univ, Beijing Innovat Ctr Future Chips ICFC, Inst Microelect, Beijing, Peoples R China
Source
PROCEEDINGS OF THE 2021 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE 2021) | 2021
Keywords
RRAM; On-chip training; Computing in memory
DOI
10.23919/DATE51398.2021.9473931
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
RRAM-based computing-in-memory (CIM) chips have shown great potential to accelerate deep neural networks on edge devices by reducing data transfer between memory and computing units. However, owing to the non-ideal characteristics of RRAM devices, the accuracy of a neural network deployed on an RRAM chip is usually lower than that of its software counterpart. Here we propose an on-chip layer-wise training (LWT) method that alleviates the adverse effects of RRAM imperfections and improves chip accuracy. Using a local validation dataset, LWT reduces communication between the edge and the cloud, which benefits personal data privacy. Simulation results on the CIFAR-10 dataset show that the LWT method improves the accuracy of VGG-16 and ResNet-18 by more than 5% and 10%, respectively, while requiring only 25% of the operations and 35% of the buffer of the back-propagation method. Moreover, a pipelined variant, pipe-LWT, is presented that further improves throughput by a factor of three.
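The core idea of the abstract — retraining a network one layer at a time to compensate for RRAM device non-idealities, instead of full back-propagation — can be illustrated with a hedged toy sketch. This is not the authors' implementation: the two-layer NumPy network, the multiplicative log-normal weight noise standing in for RRAM write variation, and the per-layer update order are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer network; W1, W2 stand in for RRAM crossbar weight arrays.
n_in, n_hid, n_out, n_samp = 8, 16, 4, 256
X = rng.normal(size=(n_samp, n_in))
W1_ideal = rng.normal(scale=0.5, size=(n_in, n_hid))
W2_ideal = rng.normal(scale=0.5, size=(n_hid, n_out))

def forward(X, W1, W2):
    h = np.maximum(X @ W1, 0.0)           # ReLU hidden layer
    return h, h @ W2

_, Y = forward(X, W1_ideal, W2_ideal)     # "software-accuracy" targets

# Assumed noise model: multiplicative log-normal perturbation of each
# weight, emulating RRAM conductance write variation after mapping.
noisy = lambda W: W * rng.lognormal(mean=0.0, sigma=0.15, size=W.shape)
W1, W2 = noisy(W1_ideal), noisy(W2_ideal)

def mse(W1, W2):
    _, out = forward(X, W1, W2)
    return float(np.mean((out - Y) ** 2))

loss_before = mse(W1, W2)

# Layer-wise training: gradient steps on ONE layer while the other
# stays frozen, so only one crossbar is updated at a time.
lr = 0.01
for _ in range(200):                      # update output layer W2
    h, out = forward(X, W1, W2)
    W2 -= lr * h.T @ (out - Y) / n_samp
for _ in range(200):                      # then update W1, W2 frozen
    h, out = forward(X, W1, W2)
    grad_h = (out - Y) @ W2.T * (h > 0)   # backprop through ReLU mask
    W1 -= lr * X.T @ grad_h / n_samp

loss_after = mse(W1, W2)
print(f"MSE before LWT: {loss_before:.4f}, after: {loss_after:.4f}")
```

Because each phase touches a single weight array, the gradient and activation buffers for all other layers need not be kept, which is the intuition behind the reduced operation and buffer costs reported in the abstract.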
Pages: 248-251
Page count: 4