An Energy-Efficient Time Domain Based Compute In-Memory Architecture for Binary Neural Network

Cited: 0
Authors
Chakraborty, Subhradip [1 ]
Kushwaha, Dinesh [2 ]
Goel, Abhishek [2 ]
Singla, Anmol [3 ]
Bulusu, Anand [2 ]
Dasgupta, Sudeb [2 ]
Affiliations
[1] RGIPT Uttar Pradesh, EE Engn Dept, Jais, India
[2] Indian Inst Technol, Elect & Commun Engn Dept, Roorkee, Uttar Pradesh, India
[3] NIT Uttarakhand, ECE Dept, Srinagar, India
Source
2024 25TH INTERNATIONAL SYMPOSIUM ON QUALITY ELECTRONIC DESIGN, ISQED 2024 | 2024
Keywords
Analog; binary neural network (BNN); compute in-memory (CIM); energy efficiency; time domain computing;
DOI
10.1109/ISQED60706.2024.10528729
CLC Number
TP3 [Computing Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
This paper presents an energy-efficient time-domain compute-in-memory (CIM) architecture to accelerate deep neural networks (DNNs). The work improves the energy efficiency of the multiply-and-accumulate (MAC) operation by performing it within the memory cell itself. The proposed approach uses time-domain computing, in which a MAC operation is carried out by adding a data-dependent delay to a reference signal. A time-to-digital converter (TDC) converts the time-domain result into digital form: a 12T time-domain bit cell generates the required delay, and a flash-type TDC performs the time-to-digital conversion. The architecture is implemented in a 45 nm CMOS technology as a 128x64 SRAM CIM macro. Simulation results show that the proposed architecture achieves an energy efficiency of 941 TOPS/W at 0.5 MHz and 1 V, which is 1.75x higher than the state of the art. Furthermore, the system attains inference accuracies of 96.7% and 84.52% on the MNIST and CIFAR-10 datasets, respectively.
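The MAC flow summarized in the abstract can be sketched behaviorally. The Python model below is a minimal illustration under assumed parameters, not the paper's circuit: the unit delay T_UNIT, the 16-level flash-TDC thresholds, and all function names are hypothetical choices for the sketch. Each bit cell whose stored binary weight matches the binary input (bipolar XNOR = +1) adds one unit of delay to a propagating reference edge, and a flash-type TDC quantizes the accumulated column delay into a digital code.

```python
import numpy as np

# Behavioral sketch of one time-domain BNN MAC column, assuming:
# - inputs/weights encoded as bipolar values in {-1, +1},
# - each matching cell (XNOR = +1) delays the reference edge by T_UNIT
#   (the paper's 12T bit cell is a transistor circuit, not modeled here),
# - a flash TDC with uniformly spaced taps produces a thermometer code.
# T_UNIT and the tap count are illustrative, not values from the paper.

T_UNIT = 10e-12   # hypothetical per-cell delay increment (10 ps)
N_ROWS = 128      # rows per column, matching the 128x64 macro

def column_delay(inputs, weights):
    """Accumulate delay along one column: every cell whose bipolar
    XNOR output is +1 adds one delay unit to the reference edge."""
    matches = np.sum(inputs * weights > 0)
    return matches * T_UNIT

def flash_tdc(delay, n_levels=16):
    """Flash-type TDC sketch: compare the delayed edge against
    n_levels uniformly spaced reference taps (thermometer code)."""
    taps = np.arange(1, n_levels + 1) * (N_ROWS * T_UNIT / n_levels)
    return int(np.sum(delay >= taps))  # quantized MAC estimate

rng = np.random.default_rng(0)
x = rng.choice([-1, +1], size=N_ROWS)  # binary activations
w = rng.choice([-1, +1], size=N_ROWS)  # binary weights stored in SRAM

d = column_delay(x, w)
code = flash_tdc(d)
exact = int(np.sum(x * w))  # ideal bipolar dot product for comparison
print(f"delay = {d*1e12:.0f} ps, TDC code = {code}, exact MAC = {exact}")
```

With bipolar encoding, the exact dot product relates to the match count m as 2m - N, so the TDC output is an affine, quantized estimate of the MAC result; the number of TDC taps sets the resolution-versus-energy trade-off that a flash-type converter makes.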
Pages: 6