A Ternary Neural Network Computing-in-Memory Processor With 16T1C Bitcell Architecture

Cited by: 5
Authors
Jeong, Hoichang [1 ]
Kim, Seungbin [2 ]
Park, Keonhee [2 ]
Jung, Jueun [1 ]
Lee, Kyuho Jason [3 ]
Affiliations
[1] Ulsan Natl Inst Sci & Technol, Dept Elect Engn, Ulsan 44919, South Korea
[2] Ulsan Natl Inst Sci & Technol, Grad Sch Artificial Intelligence, Ulsan 44919, South Korea
[3] Ulsan Natl Inst Sci & Technol, Grad Sch Artificial Intelligence, Dept Elect Engn, Ulsan 44919, South Korea
Funding
National Research Foundation of Singapore
Keywords
Computer architecture; Throughput; Neural networks; Linearity; Energy efficiency; Common Information Model (computing); Transistors; SRAM; computing-in-memory (CIM); processing-in-memory (PIM); ternary neural network (TNN); analog computing; SRAM MACRO; COMPUTATION; BINARY;
DOI
10.1109/TCSII.2023.3265064
CLC Classification Number
TM (Electrical Engineering); TN (Electronic Technology, Communication Technology)
Discipline Code
0808; 0809
Abstract
A highly energy-efficient Computing-in-Memory (CIM) processor for Ternary Neural Network (TNN) acceleration is proposed in this brief. Previous CIM processors for multi-bit-precision neural networks showed low energy efficiency and throughput, while CIM processors for lightweight binary neural networks achieved high energy efficiency at the cost of poor inference accuracy. In addition, most previous works suffered from the poor linearity of analog computing and from energy-consuming analog-to-digital conversion. To resolve these issues, we propose a Ternary CIM (T-CIM) processor with a 16T1C ternary bitcell, which provides good linearity in a compact area, and a charge-based partial-sum adder circuit that removes the analog-to-digital conversion responsible for a large portion of the system energy. Furthermore, flexible data mapping enables execution of entire convolution layers with a smaller bitcell memory capacity. Designed in 65 nm CMOS technology, the proposed T-CIM achieves a peak performance of 1,316 GOPS and an energy efficiency of 823 TOPS/W.
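As a software analogy (not the paper's analog circuit), the ternary multiply-accumulate that a TNN CIM macro evaluates can be sketched as follows. With weights restricted to {-1, 0, +1}, every multiplication collapses into a sign-selected addition, which is what makes charge-domain accumulation attractive. The quantization threshold `delta` and the vector sizes are illustrative assumptions, not values from the paper:

```python
import numpy as np

def ternarize(w, delta=0.05):
    """Quantize real-valued weights to {-1, 0, +1}.
    delta is an illustrative threshold, not taken from the paper."""
    t = np.zeros_like(w, dtype=np.int8)
    t[w > delta] = 1
    t[w < -delta] = -1
    return t

def ternary_mac(activations, ternary_weights):
    """Multiply-accumulate with ternary weights: no multiplier is
    needed, only two sign-selected accumulations and a subtraction."""
    pos = activations[ternary_weights == 1].sum()
    neg = activations[ternary_weights == -1].sum()
    return pos - neg

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=64)   # real-valued trained weights
x = rng.normal(0.0, 1.0, size=64)   # input activations
tw = ternarize(w)
# The multiplier-free MAC matches a dense dot product with the
# quantized weights.
assert np.isclose(ternary_mac(x, tw), float(x @ tw))
```

The same structure maps onto the hardware claim in the abstract: because each column only needs signed accumulation, the partial sum can be formed as charge on a capacitor instead of being digitized per bitcell, which is where the reported energy savings come from.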
Pages: 1739-1743
Page count: 5
Related Papers
30 records
  • [1] Energy-efficient computing-in-memory architecture for AI processor: device, circuit, architecture perspective
    Chang, Liang
    Li, Chenglong
    Zhang, Zhaomin
    Xiao, Jianbiao
    Liu, Qingsong
    Zhu, Zhen
    Li, Weihang
    Zhu, Zixuan
    Yang, Siqi
    Zhou, Jun
    SCIENCE CHINA-INFORMATION SCIENCES, 2021, 64 (06)
  • [2] Spatial-Temporal Hybrid Neural Network With Computing-in-Memory Architecture
    Bai, Kangjun
    Liu, Lingjia
    Yi, Yang
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-REGULAR PAPERS, 2021, 68 (07) : 2850 - 2862
  • [3] TGBNN: Training Algorithm of Binarized Neural Network With Ternary Gradients for MRAM-Based Computing-in-Memory Architecture
    Fujiwara, Yuya
    Kawahara, Takayuki
    IEEE ACCESS, 2024, 12 : 150962 - 150974
  • [4] Device-Circuit-Architecture Co-Exploration for Computing-in-Memory Neural Accelerators
    Jiang, Weiwen
    Lou, Qiuwen
    Yan, Zheyu
    Yang, Lei
    Hu, Jingtong
    Hu, Xiaobo Sharon
    Shi, Yiyu
    IEEE TRANSACTIONS ON COMPUTERS, 2021, 70 (04) : 595 - 605
  • [5] Cryogenic Operation of Computing-In-Memory based Spiking Neural Network
    Shamieh, Laith A.
    Wang, Wei-Chun
    Zhang, Shida
    Saligram, Rakshith
    Gaidhane, Amol D.
    Cao, Yu
    Raychowdhury, Arijit
    Datta, Suman
    Mukhopadhyay, Saibal
    PROCEEDINGS OF THE 29TH ACM/IEEE INTERNATIONAL SYMPOSIUM ON LOW POWER ELECTRONICS AND DESIGN, ISLPED 2024, 2024
  • [6] Extreme Partial-Sum Quantization for Analog Computing-In-Memory Neural Network Accelerators
    Kim, Yulhwa
    Kim, Hyungjun
    Kim, Jae-Joon
    ACM JOURNAL ON EMERGING TECHNOLOGIES IN COMPUTING SYSTEMS, 2022, 18 (04)
  • [7] A Novel 8T XNOR-SRAM: Computing-in-Memory Design for Binary/Ternary Deep Neural Networks
    Alnatsheh, Nader
    Kim, Youngbae
    Cho, Jaeik
    Choi, Kyuwon Ken
    ELECTRONICS, 2023, 12 (04)
  • [8] An In-Memory-Computing Binary Neural Network Architecture With In-Memory Batch Normalization
    Rege, Prathamesh Prashant
    Yin, Ming
    Parihar, Sanjay
    Versaggi, Joseph
    Nemawarkar, Shashank
    IEEE ACCESS, 2024, 12 : 190889 - 190896