An Area- and Energy-Efficient SRAM-Based Time-Domain Compute-In-Memory Architecture for BNN

Cited: 0
Authors
Chakraborty, Subhradip [1]
Kushwaha, Dinesh [2]
Bulusu, Anand [2]
Dasgupta, Sudeb [2]
Affiliations
[1] Rajiv Gandhi Inst Petr Technol, Amethi, India
[2] Indian Inst Technol IIT Roorkee, Roorkee, India
Source
2024 IEEE 6TH INTERNATIONAL CONFERENCE ON AI CIRCUITS AND SYSTEMS, AICAS 2024 | 2024
Keywords
Analog; binary neural network (BNN); compute-in-memory (CIM); energy efficiency; time-domain computing; MACRO
DOI
10.1109/AICAS59952.2024.10595875
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
In this paper, we present an area- and energy-efficient SRAM-based time-domain compute-in-memory (TDCIM) architecture to accelerate binary neural networks (BNNs). We propose an area-efficient time-domain multiplication bitcell (MC) that is 1.5x more area efficient than the state of the art. The proposed TDCIM focuses on improving the energy efficiency of the multiplication and accumulation (MAC) operation and achieves 1.6x higher energy efficiency than the state of the art. The work uses a time-domain computing approach in which each MAC operation is encoded as an equivalent delay. A robust 10T1C SRAM bitcell is proposed to perform the in-memory XNOR operation, and a time-to-digital converter (TDC) is used instead of an analog-to-digital converter (ADC) to reduce power consumption. The architecture has been implemented in a 45 nm CMOS technology as a 25x32 SRAM CIM macro. Post-layout simulation results demonstrate that the proposed architecture achieves an energy efficiency of 877.32 TOPS/W at a VDD of 0.9 V and an inference accuracy of 98.68% on the MNIST dataset.
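To make the abstract's computing scheme concrete, the following is a minimal behavioral sketch in Python, not the authors' circuit or RTL. It assumes a +1/-1 encoding of binary weights and activations, a hypothetical unit delay contributed by each matching (XNOR = 1) bitcell, and an illustrative 5-bit TDC; the names UNIT_DELAY_PS, xnor_mac, and time_domain_mac are introduced here for illustration and do not come from the paper.

# Behavioral sketch (assumptions noted above): a BNN MAC modeled in the time
# domain. Each in-memory XNOR that evaluates to 1 adds one unit of delay, so
# the accumulated delay encodes the MAC result, which a time-to-digital
# converter (TDC) quantizes instead of an ADC.

import numpy as np

UNIT_DELAY_PS = 10.0  # assumed delay added by one matching (XNOR = 1) bitcell


def xnor_mac(weights: np.ndarray, activations: np.ndarray) -> int:
    """Bitwise XNOR-and-popcount MAC for a BNN slice with +1/-1 encoding."""
    matches = (weights == activations)           # XNOR of the sign bits
    popcount = int(np.count_nonzero(matches))    # number of matching positions
    return 2 * popcount - weights.size           # equivalent signed dot product


def time_domain_mac(weights: np.ndarray, activations: np.ndarray,
                    tdc_bits: int = 5) -> int:
    """Model the accumulated delay and its TDC read-out code."""
    matches = int(np.count_nonzero(weights == activations))
    total_delay_ps = matches * UNIT_DELAY_PS          # delay-line output
    full_scale_ps = weights.size * UNIT_DELAY_PS      # delay when all bits match
    # TDC quantizes the delay into 2**tdc_bits levels over the full scale.
    return round(total_delay_ps / full_scale_ps * (2 ** tdc_bits - 1))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.choice([-1, 1], size=32)
    a = rng.choice([-1, 1], size=32)
    print("signed MAC result:", xnor_mac(w, a))
    print("TDC output code:  ", time_domain_mac(w, a))

The sketch only mirrors the functional relationship described in the abstract (XNOR, popcount-as-delay, TDC quantization); the actual unit delay, TDC resolution, and macro dimensions are design parameters reported in the paper itself.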
Pages: 184-188
Page count: 5