Flash based In-Memory Multiply-Accumulate Realisation: A Theoretical Study

Cited by: 0
Authors
Balagopal, Ashwin S. [1 ]
Viraraghavan, Janakiraman [1 ]
Affiliations
[1] Indian Inst Technol Madras, Dept Elect Engn, Integrated Circuits & Syst Grp, Chennai 600036, Tamil Nadu, India
Source
2020 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS) | 2020
Keywords
NEURAL-NETWORKS; SRAM
DOI
Not available
CLC Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline Codes
0808; 0809
Abstract
In-memory computing is gaining traction as a technique to implement the Multiply-Accumulate (MAC) operation on edge devices, enabling neural-network inference while reducing the energy expended on memory fetches. The voltage developed along a bit-line is an analog representation of the MAC value and must be digitized for further processing. In this paper we propose using the sense amplifier as a comparator to perform this digitization with a serial flash ADC implemented in-memory. A flash ADC requires an ordered set of reference voltages against which the input is compared. Recognizing that the MAC value is non-uniformly distributed and application specific, we propose an algorithm to generate reference voltages tailored to the MAC distribution. Further, we show that each reference voltage can be generated in-memory, in much the same way as the MAC voltage is generated along a column, and we provide an algorithm to populate the bit-cells of the reference column so that it develops the appropriate reference voltage. Experiments on the MNIST, SVHN and CIFAR-10 data sets show that the proposed technique incurs a worst-case accuracy reduction of 0.8% relative to double-precision evaluation.
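The idea of tailoring the comparator thresholds to the MAC distribution can be illustrated with a minimal sketch. This is a hypothetical example, not the authors' algorithm: it assumes thresholds are placed at equal-probability quantiles of an observed MAC distribution, and that a reference column with a fixed number of bit-cells realises each threshold by activating an integer count of cells whose combined voltage scales linearly with that count. The function names, the quantile placement and the linear column model are all assumptions for illustration.

import numpy as np

def reference_levels(mac_samples, n_bits):
    """Place 2**n_bits - 1 thresholds at equal-probability quantiles
    of the observed MAC distribution (assumed placement rule)."""
    n_thresholds = 2 ** n_bits - 1
    probs = np.linspace(0.0, 1.0, n_thresholds + 2)[1:-1]  # interior quantiles only
    return np.quantile(mac_samples, probs)

def populate_reference_column(levels, mac_max, n_cells):
    """Map each analog threshold to an integer count of active bit-cells,
    assuming the reference-column voltage scales linearly with that count."""
    counts = np.rint(np.asarray(levels) / mac_max * n_cells).astype(int)
    return np.clip(counts, 0, n_cells)

# Example: 3-bit digitisation of MAC values drawn from a skewed distribution.
rng = np.random.default_rng(0)
mac_samples = rng.binomial(n=64, p=0.2, size=10_000)  # stand-in for observed MAC values
levels = reference_levels(mac_samples, n_bits=3)
cell_counts = populate_reference_column(levels, mac_max=64, n_cells=64)
print("thresholds:", levels)
print("active cells per reference threshold:", cell_counts)

Because the samples are skewed toward low MAC values, the quantile placement concentrates thresholds where the distribution has mass, which is the behaviour the abstract attributes to its distribution-aware reference generation.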
Pages: 5