An Analog Neural Network Computing Engine Using CMOS-Compatible Charge-Trap-Transistor (CTT)

Cited by: 29
Authors
Du, Yuan [1 ,2 ]
Du, Li [1 ,2 ]
Gu, Xuefeng [3 ]
Du, Jieqiong [1 ]
Wang, X. Shawn [1 ]
Hu, Boyu [1 ]
Jiang, Mingzhe [2 ]
Chen, Xiaoliang [4 ]
Iyer, Subramanian S. [3 ]
Chang, Mau-Chung Frank [1 ,5 ]
Affiliations
[1] Univ Calif Los Angeles, High Speed Elect Lab, Los Angeles, CA 90095 USA
[2] Kneron Inc, Hardware Res Dept, San Diego, CA 92121 USA
[3] Univ Calif Los Angeles, Elect Engn Dept, Ctr Heterogeneous Integrat & Performance Scaling, Los Angeles, CA 90095 USA
[4] Univ Calif Irvine, Dept Elect & Comp Engn, Irvine, CA 92697 USA
[5] Natl Chiao Tung Univ, Hsinchu 30010, Taiwan
Keywords
Analog computing engine; artificial neural networks; charge-trap transistors (CTTs); fully connected neural networks (FCNNs);
DOI
10.1109/TCAD.2018.2859237
CLC Classification
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
An analog neural network computing engine based on CMOS-compatible charge-trap transistors (CTTs) is proposed in this paper. CTT devices are used as analog multipliers; compared with digital multipliers, the CTT-based analog multiplier achieves significant area and power reductions. The proposed computing engine is composed of a scalable CTT multiplier array and energy-efficient analog-digital interfaces. By implementing a sequential analog fabric, the engine's mixed-signal interfaces are simplified, and the hardware overhead remains constant regardless of the array size. A proof-of-concept 784-by-784 CTT computing engine is implemented in TSMC 28-nm CMOS technology and occupies 0.68 mm². The simulated performance reaches 76.8 TOPS (8-bit) at a 500 MHz clock frequency while consuming 14.8 mW. As an example, we use this computing engine to address a classic pattern recognition problem, classifying handwritten digits on the MNIST database, and obtain accuracy comparable to state-of-the-art fully connected neural networks at 8-bit fixed-point resolution.
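The computation the CTT array performs in the analog domain is, functionally, an 8-bit fixed-point multiply-accumulate over a fully connected layer. The following is an illustrative sketch (not from the paper): the weight matrix, input vector, and random values here are hypothetical stand-ins, chosen only to mirror the 784-by-784 array size and 8-bit resolution the abstract states.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 8-bit quantized weights (conceptually stored as CTT
# threshold-voltage shifts) and an 8-bit input vector, e.g. a
# flattened 28x28 = 784-pixel MNIST image.
W = rng.integers(-128, 128, size=(784, 784), dtype=np.int16)
x = rng.integers(0, 256, size=784, dtype=np.int16)

# Each output accumulates 784 products. Accumulate in int32:
# |product| <= 128 * 255, and a sum of 784 such terms fits in 32 bits.
y = W.astype(np.int32) @ x.astype(np.int32)

# One pass of the 784x784 array performs 784 * 784 ≈ 614k
# multiply-accumulates; in the engine these happen in the analog
# domain and are digitized through the shared mixed-signal interface.
print(y.shape)
```

This digital reference computes the same result the analog array approximates, which is how such engines are typically validated against a fixed-point software model.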
Pages: 1811-1819 (9 pages)