IMPULSE: A 65-nm Digital Compute-in-Memory Macro With Fused Weights and Membrane Potential for Spike-Based Sequential Learning Tasks

Cited by: 30
Authors
Agrawal, Amogh [1 ]
Ali, Mustafa [1 ]
Koo, Minsuk [2 ]
Rathi, Nitin [1 ]
Jaiswal, Akhilesh [3 ]
Roy, Kaushik [1 ]
Affiliations
[1] Purdue Univ, Sch Elect & Comp Engn, W Lafayette, IN 47906 USA
[2] Incheon Natl Univ, Dept Comp Sci & Engn, Incheon 22012, South Korea
[3] Univ Southern Calif, Inst Informat Sci, Los Angeles, CA 90007 USA
Source
IEEE SOLID-STATE CIRCUITS LETTERS | 2021, Vol. 4
Funding
U.S. National Science Foundation
Keywords
Compute-in-memory (CIM); neuromorphic computing; sentiment analysis; spiking neural network (SNN); SRAM;
DOI
10.1109/LSSC.2021.3092727
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
The inherent dynamics of the neuron membrane potential in spiking neural networks (SNNs) allow sequential learning tasks to be processed without the complexity of recurrent neural networks. The highly sparse, spike-based computation on such spatiotemporal data can be leveraged for energy efficiency. However, the membrane potential incurs additional memory-access bottlenecks in current SNN hardware. To that end, we propose a 10T-SRAM compute-in-memory (CIM) macro designed specifically for state-of-the-art SNN inference. It consists of a fused weight (W-MEM) and membrane potential (V-MEM) memory and inherently exploits sparsity in input spikes, leading to a ~97.4% reduction in energy-delay product (EDP) at 85% sparsity (typical of the SNNs considered in this work) compared to the no-sparsity case. We propose staggered data mapping and reconfigurable peripherals to handle the different bit-precision requirements of W-MEM and V-MEM while supporting multiple neuron functionalities. The proposed macro was fabricated in 65-nm CMOS technology, achieving an energy efficiency of 0.99 TOPS/W at a 0.85-V supply and 200-MHz frequency for signed 11-bit operations. We evaluate the SNN on sentiment classification with the IMDB dataset of movie reviews, achieving accuracy within ~1% of, and ~5x higher energy efficiency than, a corresponding long short-term memory (LSTM) network.
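To make the sparsity argument concrete, below is a minimal sketch of a leaky integrate-and-fire timestep in which binary input spikes gate the weight accumulation, so silent inputs cost no work. This is an illustrative model only, not the fabricated macro's circuit behavior: the function name, the leak and threshold values, and the soft-reset choice are assumptions made for the example.

```python
import numpy as np

def lif_step(v, spikes_in, weights, leak=0.9, v_th=1.0):
    """One leaky integrate-and-fire timestep with spike-gated accumulation.

    Illustrative sketch: the macro fuses weight (W-MEM) and membrane
    potential (V-MEM) storage in 10T-SRAM; here both are plain arrays,
    and leak/v_th are placeholder values, not chip parameters.
    """
    active = np.flatnonzero(spikes_in)          # indices of '1' input spikes
    # Binary spikes reduce the MAC to summing the selected weight rows;
    # rows for silent inputs are skipped entirely -- at 85% input sparsity
    # only ~15% of rows are touched, which is where the saving comes from.
    v = leak * v + weights[active].sum(axis=0)
    spikes_out = v >= v_th                      # fire where threshold is crossed
    v = np.where(spikes_out, v - v_th, v)       # soft reset after a spike
    return v, spikes_out.astype(np.uint8)

# Toy usage: 128 inputs at ~85% sparsity driving 32 neurons.
rng = np.random.default_rng(0)
w = rng.integers(-16, 16, size=(128, 32)).astype(np.int32)   # signed weights
v = np.zeros(32)
s_in = (rng.random(128) > 0.85).astype(np.uint8)             # ~15% active
v, s_out = lif_step(v, s_in, w)
```

One plausible reading of the reported EDP figure, under the assumption that both energy and delay scale roughly with the active-input fraction (1 - s): EDP then scales as (1 - s)^2, and s = 0.85 gives 0.15^2 ≈ 0.0225, i.e., a ~97.7% reduction, in line with the measured ~97.4%. This scaling model is an assumption for intuition, not a claim from the letter.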
Pages: 137-140
Page count: 4