Energy-Efficient Time-Domain Vector-by-Matrix Multiplier for Neurocomputing and Beyond

Cited by: 44
Authors
Bavandpour, Mohammad [1]
Mahmoodi, Mohammad Reza [1]
Strukov, Dmitri B. [1]
Affiliations
[1] Univ Calif Santa Barbara, Dept Elect & Comp Engn, Santa Barbara, CA 93106 USA
Keywords
Time-domain computing; floating gate memory; vector matrix multiplication; neuromorphic computing
DOI
10.1109/TCSII.2019.2891688
CLC classification
TM [Electrical technology]; TN [Electronics and communication technology]
Discipline classification codes
0808; 0809
Abstract
We propose an extremely energy-efficient mixed-signal N × N vector-by-matrix multiplier (VMM) operating in the time domain. Multi-bit inputs and outputs are represented with time-encoded digital signals, while multi-bit matrix weights are realized with adjustable current sources, e.g., transistors biased in the subthreshold regime. The major advantage of the proposed approach over other mixed-signal implementations is its very compact peripheral circuitry, which is essential for achieving high energy efficiency and speed at the system level. As a case study, we have designed a multilayer perceptron, based on two layers of 10 × 10 four-quadrant multipliers, in a 55-nm process with embedded NOR flash memory technology, which allows a compact implementation of the adjustable current sources. Our analysis, based on memory cell measurements, shows that >6-bit operation can be ensured for larger (N > 50) VMMs. Post-layout estimates for the 55-nm 6-bit VMM, which account for PVT variations, noise, and the overhead of the I/O circuitry converting between conventional digital and time-domain representations, show ~7 fJ/Op for N > 500. The energy efficiency can be further improved to the POp/J regime with more aggressive, optimized designs.
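To make the time-domain encoding described in the abstract concrete, the following is a minimal numerical sketch (our illustration, not the paper's implementation): inputs become pulse durations, weights become cell currents, and each output accumulates charge proportional to a dot product. The scale factors T_max, I_unit, and C_out are assumed values chosen only for demonstration.

```python
import numpy as np

# Sketch of time-domain VMM (illustrative assumptions, not the paper's design):
# input x_i  -> pulse of duration T_i = x_i * T_max
# weight w_ij -> adjustable cell current I_ij = w_ij * I_unit (e.g., subthreshold)
# output j   -> integrated charge Q_j = sum_i I_ij * T_i on the output line

rng = np.random.default_rng(0)
N = 10                    # vector/matrix size, as in the 10 x 10 case study
T_max = 1e-6              # full-scale input pulse duration [s] (assumed)
I_unit = 1e-9             # full-scale cell current [A] (assumed)
C_out = 1e-12             # output integration capacitance [F] (assumed)

x = rng.random(N)         # multi-bit inputs, normalized to [0, 1]
W = rng.random((N, N))    # multi-bit weights, normalized to [0, 1]

T_in = x * T_max          # time-encoded inputs
I_cell = W * I_unit       # current-encoded weights

Q = I_cell @ T_in         # charge collected per output line [C]
V_out = Q / C_out         # resulting voltage ramp on each output line

# The analog result is proportional to the ideal VMM product W @ x.
y_decoded = Q / (I_unit * T_max)
print(np.allclose(y_decoded, W @ x))   # True: the charge domain recovers W @ x
```

In the actual circuit the decoded result would itself be re-encoded as a pulse duration for the next layer; this sketch only checks that the charge-domain accumulation reproduces the ideal product.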
Pages: 1512-1516
Number of pages: 5