An In-Memory-Computing Binary Neural Network Architecture With In-Memory Batch Normalization

Cited by: 0
Authors
Rege, Prathamesh Prashant [1 ]
Yin, Ming [2 ]
Parihar, Sanjay [3 ]
Versaggi, Joseph [2 ]
Nemawarkar, Shashank [3 ]
Affiliations
[1] Northeastern Univ, Boston, MA 02115 USA
[2] GLOBALFOUNDRIES, Malta, NY 12020 USA
[3] GLOBALFOUNDRIES, Austin, TX 78735 USA
Source
IEEE ACCESS | 2024, Vol. 12
Keywords
Accuracy; Neural networks; Batch normalization; Convolutional neural networks; Training; Data models; Voltage control; In-memory computing; SRAM chips; binary neural network; edge device; in-memory computing; process variation; SRAM;
DOI
10.1109/ACCESS.2024.3444481
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
This paper describes an in-memory computing architecture that combines full-precision computation for the first and last layers of a neural network with binary weights and input activations for the intermediate layers. This approach offers an efficient and effective way to optimize neural-network computation, reducing complexity and enhancing energy efficiency. Notably, multiple architecture-level optimizations are developed to keep the operations binary, thereby eliminating the need for intricate digital-logic components external to the memory units. A key contribution of this study is in-memory batch normalization, which is implemented to maintain good accuracy for CIFAR-10 classification. Despite the inherent challenges posed by process variation, the proposed design achieves an accuracy of 78%. Furthermore, the SRAM layer of the architecture delivers an energy efficiency of 1086 TOPS/W, a throughput of 23 TOPS, and an area efficiency of 60 TOPS/mm². This novel in-memory computing architecture offers a promising solution for next-generation, efficient, high-performance deep learning applications.
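The abstract does not spell out how the in-memory batch normalization is realized inside the SRAM array, but binary-neural-network designs of this kind commonly rely on folding the batch-norm affine transform and the subsequent sign() binarization into a single per-channel threshold on the bitline accumulation, so that no separate normalization arithmetic is needed outside the array. The sketch below is a minimal NumPy illustration of that folding under this assumption; the function names (fold_bn_into_threshold, binarize_with_threshold) and the per-channel statistics are illustrative, not taken from the paper.

```python
import numpy as np

def fold_bn_into_threshold(gamma, beta, mu, var, eps=1e-5):
    # BN: y = gamma * (x - mu) / sqrt(var + eps) + beta
    # sign(y) >= 0 is equivalent to
    #   x >= tau  when gamma > 0,   x <= tau  when gamma < 0,
    # with tau = mu - beta * sqrt(var + eps) / gamma.
    sigma = np.sqrt(var + eps)
    tau = mu - beta * sigma / gamma
    flip = gamma < 0  # channels where the comparison direction flips
    return tau, flip

def binarize_with_threshold(acc, tau, flip):
    # Apply the folded BN + sign() to raw accumulator outputs `acc`
    # (shape [batch, channels]); returns activations in {-1, +1}.
    keep = np.where(flip, acc <= tau, acc >= tau)
    return keep.astype(np.int8) * 2 - 1

# Self-check against the unfolded BN-then-sign reference.
rng = np.random.default_rng(0)
C = 8
gamma = rng.normal(1.0, 0.2, C)
gamma[0] = -abs(gamma[0])          # force one negative-gamma channel
beta = rng.normal(0.0, 0.5, C)
mu = rng.normal(0.0, 1.0, C)
var = rng.uniform(0.5, 2.0, C)
acc = rng.normal(0.0, 2.0, (4, C))  # stand-in for bitline accumulations

tau, flip = fold_bn_into_threshold(gamma, beta, mu, var)
ref = np.where(gamma * (acc - mu) / np.sqrt(var + 1e-5) + beta >= 0, 1, -1)
assert np.array_equal(binarize_with_threshold(acc, tau, flip),
                      ref.astype(np.int8))
```

In hardware, the folded threshold tau would presumably be applied by adjusting a reference level or count at the sense amplifier rather than by an explicit digital comparison, but the arithmetic equivalence shown above is the same.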
Pages: 190889-190896 (8 pages)