Design and implementation of a charge-sharing in-memory-computing macro with sparse feature for quantized neural network

Cited: 0
Authors
Liu, Yihe [1 ]
Wang, Junjie [1 ]
Liu, Shuang [1 ]
Sun, Mingyuan [2 ]
Zhang, Xiaoyang [3 ]
Zhou, Jingtao [1 ]
Yan, Shiqin [1 ]
Pan, Ruicheng [1 ]
Hu, Hao [1 ]
Liu, Yang [1 ]
Affiliations
[1] Univ Elect Sci & Technol China, State Key Lab Thin Solid Films & Integrated Device, 2006 Xiyuan Ave, Chengdu 610054, Sichuan, Peoples R China
[2] Chang Feng Mech & Elect Technol Acad, Beijing 100089, Peoples R China
[3] Beijing Inst Remote Sensing Equipment, Sci & Technol Millimeter Wave Lab, Beijing 100854, Peoples R China
Keywords
In-memory computing; Artificial intelligence; High energy efficiency; SRAM macro; Computation
DOI
10.1016/j.mejo.2024.106470
Chinese Library Classification (CLC)
TM [Electrical technology]; TN [Electronic technology, communication technology]
Discipline classification code
0808; 0809
Abstract
With the rapid development of artificial intelligence, in-memory computing has become a research hotspot. In this article, we propose an in-memory computing (IMC) architecture that achieves high energy efficiency and performance. Our work is based on the charge-sharing mechanism, which enables configurable multi-bit multiply-accumulate (MAC) operations. A distinctive bit-cell structure implements bit-level sparsity strategies in the IMC array and compensates for errors caused by non-ideal effects, thereby improving energy efficiency and performance. A hardware-aware quantization method and a PyTorch-based hardware simulation model are proposed to evaluate the hardware mapping and to compare this work with other charge-domain IMC designs. The MNIST and CIFAR-10 datasets are used to validate the algorithm models and chip performance, achieving accuracy rates of 97.6% and 90.5%, respectively. The IMC chip was fabricated in a 180 nm CMOS process, and measurements show that it achieves an energy efficiency of 41.8 TOPS/W.
Pages: 11
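As an illustration of the kind of PyTorch-based evaluation described in the abstract, the sketch below models a hardware-aware quantized dot product by decomposing it into 1-bit partial sums and skipping all-zero bit planes (a simple form of bit-level sparsity). It is a minimal sketch under assumed conventions: the function names (quantize_uniform, bitserial_mac), the 4-bit widths, and the unsigned uniform quantization scheme are illustrative choices, not details taken from the paper.

    import torch

    # Assumption-based sketch: uniform quantization plus a bit-serial MAC model that
    # skips all-zero bit planes, loosely mirroring the bit-level sparsity idea in the
    # abstract. Names and bit widths are illustrative, not taken from the paper.

    def quantize_uniform(t, n_bits):
        """Quantize a non-negative tensor to n_bits unsigned integers; return codes and scale."""
        qmax = 2 ** n_bits - 1
        scale = t.abs().max().clamp(min=1e-8) / qmax
        return torch.clamp(torch.round(t / scale), 0, qmax).long(), scale

    def bitserial_mac(x_q, w_q, x_bits=4, w_bits=4):
        """Integer dot product decomposed into 1-bit partial sums (bit-serial MAC model)."""
        acc = torch.zeros((), dtype=torch.long)
        for i in range(x_bits):
            x_plane = (x_q >> i) & 1
            if x_plane.sum() == 0:        # bit-level sparsity: skip all-zero activation plane
                continue
            for j in range(w_bits):
                w_plane = (w_q >> j) & 1
                if w_plane.sum() == 0:    # skip all-zero weight plane
                    continue
                partial = (x_plane * w_plane).sum()   # 1b x 1b popcount-style partial sum
                acc = acc + partial * (1 << (i + j))  # shift-and-add recombination
        return acc

    # Usage: approximate a real-valued dot product through the quantized MAC model.
    x, w = torch.rand(64), torch.rand(64)
    x_q, sx = quantize_uniform(x, 4)
    w_q, sw = quantize_uniform(w, 4)
    y = bitserial_mac(x_q, w_q) * sx * sw   # compare against torch.dot(x, w)

In a charge-domain IMC macro, skipping an all-zero bit plane would presumably avoid the corresponding compute cycle, which is where the energy saving from bit-level sparsity is assumed to come from in this sketch.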