Fully hardware-implemented memristor convolutional neural network

Cited: 1571
Authors
Yao, Peng [1 ]
Wu, Huaqiang [1 ,2 ]
Gao, Bin [1 ,2 ]
Tang, Jianshi [1 ,2 ]
Zhang, Qingtian [1 ]
Zhang, Wenqiang [1 ]
Yang, J. Joshua [3 ]
Qian, He [1 ,2 ]
Affiliations
[1] Tsinghua Univ, Beijing Innovat Ctr Future Chips ICFC, Inst Microelect, Beijing, Peoples R China
[2] Tsinghua Univ, Beijing Natl Res Ctr Informat Sci & Technol BNRis, Beijing, Peoples R China
[3] Univ Massachusetts, Dept Elect & Comp Engn, Amherst, MA 01003 USA
Funding
National Natural Science Foundation of China;
Keywords
ANALOG; MEMORY;
DOI
10.1038/s41586-020-1942-4
CLC Number
O [Mathematical Sciences and Chemistry]; P [Astronomy, Earth Sciences]; Q [Biosciences]; N [General Natural Sciences];
Subject Classification Codes
07; 0710; 09;
Abstract
Memristor-enabled neuromorphic computing systems provide a fast and energy-efficient approach to training neural networks(1-4). However, convolutional neural networks (CNNs)-one of the most important models for image recognition(5)-have not yet been fully hardware-implemented using memristor crossbars, which are cross-point arrays with a memristor device at each intersection. Moreover, achieving software-comparable results is highly challenging owing to the poor yield, large variation and other non-ideal characteristics of devices(6-9). Here we report the fabrication of high-yield, high-performance and uniform memristor crossbar arrays for the implementation of CNNs, which integrate eight 2,048-cell memristor arrays to improve parallel-computing efficiency. In addition, we propose an effective hybrid-training method to adapt to device imperfections and improve the overall system performance. We built a five-layer memristor-based CNN to perform MNIST(10) image recognition, and achieved a high accuracy of more than 96 per cent. In addition to parallel convolutions using different kernels with shared inputs, replication of multiple identical kernels in memristor arrays was demonstrated for processing different inputs in parallel. The memristor-based CNN neuromorphic system has an energy efficiency more than two orders of magnitude greater than that of state-of-the-art graphics-processing units, and is shown to be scalable to larger networks, such as residual neural networks. Our results are expected to enable a viable memristor-based non-von Neumann hardware solution for deep neural networks and edge computing.
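The crossbar operation the abstract describes - parallel convolutions using different kernels with shared inputs - can be sketched numerically. A crossbar computes a vector-matrix multiply in one analog step: input voltages drive the rows, each cross-point conductance contributes a current by Ohm's law, and currents sum along each column by Kirchhoff's law. The sketch below is a conceptual illustration only, not the paper's implementation; the function names and the differential-pair encoding of signed weights as two non-negative conductances are assumptions for illustration.

```python
import numpy as np

def crossbar_vmm(voltages, g_pos, g_neg):
    """One analog step of a differential-pair crossbar:
    I_j = sum_i V_i * (G+_{ij} - G-_{ij})."""
    return voltages @ (g_pos - g_neg)

def conv2d_via_crossbar(image, kernels):
    """Map a convolution onto a crossbar: each kernel is unrolled into one
    column of conductances, and each sliding window of the image is applied
    as a shared input-voltage vector, so all kernels compute in parallel."""
    n_k = kernels.shape[0]          # number of kernels (crossbar columns)
    k = kernels.shape[-1]           # kernel size (k x k), single channel
    h, w = image.shape
    # Unroll kernels into a (k*k, n_k) weight matrix, then split signed
    # weights into two non-negative conductance arrays (differential pair).
    w_mat = kernels.reshape(n_k, k * k).T
    g_pos, g_neg = np.maximum(w_mat, 0), np.maximum(-w_mat, 0)
    out = np.zeros((h - k + 1, w - k + 1, n_k))
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            window = image[i:i + k, j:j + k].reshape(-1)    # shared input voltages
            out[i, j] = crossbar_vmm(window, g_pos, g_neg)  # one VMM per window
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((6, 6))
kernels = rng.standard_normal((4, 3, 3))
result = conv2d_via_crossbar(img, kernels)
print(result.shape)  # (4, 4, 4): 4x4 output positions, 4 kernels in parallel
```

Replicating identical kernel columns across several arrays, as the paper demonstrates, would let multiple windows (or multiple inputs) be processed in the same step rather than sequentially as in this loop.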
Pages: 641-646
Page count: 6
Related References
41 items
[1]   Equivalent-accuracy accelerated neural-network training using analogue memory [J].
Ambrogio, Stefano ;
Narayanan, Pritish ;
Tsai, Hsinyu ;
Shelby, Robert M. ;
Boybat, Irem ;
di Nolfo, Carmelo ;
Sidler, Severin ;
Giordano, Massimo ;
Bodini, Martina ;
Farinha, Nathan C. P. ;
Killeen, Benjamin ;
Cheng, Christina ;
Jaoudi, Yassine ;
Burr, Geoffrey W. .
NATURE, 2018, 558 (7708) :60-+
[2]  
[Anonymous], 2013, Proceedings of the 30th International Conference on Machine Learning
[3]   Neuromorphic computing using non-volatile memory [J].
Burr, Geoffrey W. ;
Shelby, Robert M. ;
Sebastian, Abu ;
Kim, Sangbum ;
Kim, Seyoung ;
Sidler, Severin ;
Virwani, Kumar ;
Ishii, Masatoshi ;
Narayanan, Pritish ;
Fumarola, Alessandro ;
Sanches, Lucas L. ;
Boybat, Irem ;
Le Gallo, Manuel ;
Moon, Kibong ;
Woo, Jiyoo ;
Hwang, Hyunsang ;
Leblebici, Yusuf .
ADVANCES IN PHYSICS-X, 2017, 2 (01) :89-124
[4]   Experimental Demonstration and Tolerancing of a Large-Scale Neural Network (165 000 Synapses) Using Phase-Change Memory as the Synaptic Weight Element [J].
Burr, Geoffrey W. ;
Shelby, Robert M. ;
Sidler, Severin ;
di Nolfo, Carmelo ;
Jang, Junwoo ;
Boybat, Irem ;
Shenoy, Rohit S. ;
Narayanan, Pritish ;
Virwani, Kumar ;
Giacometti, Emanuele U. ;
Kuerdi, Bulent N. ;
Hwang, Hyunsang .
IEEE TRANSACTIONS ON ELECTRON DEVICES, 2015, 62 (11) :3498-3507
[5]  
Cai Y, 2018, ASIA S PACIF DES AUT, P117, DOI 10.1109/ASPDAC.2018.8297292
[6]   Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional Neural Networks [J].
Chen, Yu-Hsin ;
Krishna, Tushar ;
Emer, Joel S. ;
Sze, Vivienne .
IEEE JOURNAL OF SOLID-STATE CIRCUITS, 2017, 52 (01) :127-138
[7]   SiGe epitaxial memory for neuromorphic computing with reproducible high performance based on engineered dislocations [J].
Choi, Shinhyun ;
Tan, Scott H. ;
Li, Zefan ;
Kim, Yunjo ;
Choi, Chanyeol ;
Chen, Pai-Yu ;
Yeon, Hanwool ;
Yu, Shimeng ;
Kim, Jeehwan .
NATURE MATERIALS, 2018, 17 (04) :335-+
[8]   Light-tuned selective photosynthesis of azo- and azoxy-aromatics using graphitic C3N4 [J].
Dai, Yitao ;
Li, Chao ;
Shen, Yanbin ;
Lim, Tingbin ;
Xu, Jian ;
Li, Yongwang ;
Niemantsverdriet, Hans ;
Besenbacher, Flemming ;
Lock, Nina ;
Su, Ren .
NATURE COMMUNICATIONS, 2018, 9
[9]   Phase-change heterostructure enables ultralow noise and drift for memory operation [J].
Ding, Keyuan ;
Wang, Jiangjing ;
Zhou, Yuxing ;
Tian, He ;
Lu, Lu ;
Mazzarello, Riccardo ;
Jia, Chunlin ;
Zhang, Wei ;
Rao, Feng ;
Ma, Evan .
SCIENCE, 2019, 366 (6462) :210-+
[10]  
Donahue J, 2014, PR MACH LEARN RES, V32