Design of Processing-in-Memory With Triple Computational Path and Sparsity Handling for Energy-Efficient DNN Training

Cited by: 1
Authors
Han, Wontak [1 ]
Heo, Jaehoon [1 ]
Kim, Junsoo [1 ]
Lim, Sukbin [1 ]
Kim, Joo-Young [1 ]
Affiliations
[1] Korea Adv Inst Sci & Technol, Dept Elect Engn, Daejeon 34141, South Korea
Keywords
Training; Computational modeling; Computer architecture; Deep learning; Circuits and systems; Power demand; Neurons; Accelerator architecture; machine learning; processing-in-memory architecture; bit-serial operation; inference; training; sparsity handling; SRAM; energy-efficient architecture; DEEP NEURAL-NETWORKS; SRAM; ACCELERATOR; MACRO
DOI
10.1109/JETCAS.2022.3168852
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Discipline Classification Code
0808; 0809
Abstract
As machine learning (ML) and artificial intelligence (AI) have become mainstream technologies, many accelerators have been proposed to cope with their computation kernels. However, they access external memory frequently due to the large size of deep neural network models, suffering from the von Neumann bottleneck. Moreover, as privacy issues become more critical, on-device training is emerging as a solution. On-device training is challenging, however, because it must run under a limited power budget while requiring far more computation and memory access than inference. In this paper, we present T-PIM, an energy-efficient processing-in-memory (PIM) architecture that supports end-to-end on-device training. Its macro design includes an 8T-SRAM cell-based PIM block that computes in-memory AND operations, plus three computational datapaths for end-to-end training. The three paths integrate arithmetic units for forward propagation, backward propagation, and gradient calculation with weight update, respectively, allowing the weight data to remain stationary in memory. T-PIM also supports variable bit precision to cover various ML scenarios: fully variable input bit precision with 2-bit, 4-bit, 8-bit, or 16-bit weight precision for forward propagation, and the same input bit precision with 16-bit weight precision for backward propagation. In addition, T-PIM implements sparsity-handling schemes that skip computation for zero input data and turn off the arithmetic units for zero weight data, reducing both unnecessary computation and leakage power. Finally, we fabricate the T-PIM chip on a 5.04 mm² die in a 28-nm CMOS logic process. It operates at 50-280 MHz with a supply voltage of 0.75-1.05 V, dissipating 5.25-51.23 mW in inference and 6.10-37.75 mW in training. As a result, it achieves 17.90-161.08 TOPS/W energy efficiency for inference with 1-bit activations and 2-bit weights, and 0.84-7.59 TOPS/W for training with 8-bit activations/errors and 16-bit weights. In conclusion, T-PIM is the first PIM chip that supports end-to-end training, demonstrating a 2.02x performance improvement over the latest PIM chip that only partially supports training.
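
To make the bit-serial, in-memory AND computation and the two sparsity-handling schemes described in the abstract concrete, below is a minimal Python sketch. It is an illustration under assumptions, not the T-PIM implementation: the function name bit_serial_mac is hypothetical, values are unsigned for simplicity, and the row-parallel in-memory AND of the 8T-SRAM array is modeled here as a sequential per-bit loop.

    def bit_serial_mac(inputs, weights, in_bits=8, w_bits=16):
        """Compute sum(x * w) by streaming each input bit-serially and
        combining it with the stored weight bits via AND operations."""
        acc = 0
        for x, w in zip(inputs, weights):
            if w == 0:
                # Weight sparsity: a zero weight contributes nothing; in
                # hardware this would correspond to turning off that weight's
                # arithmetic units to cut leakage power.
                continue
            for i in range(in_bits):
                x_bit = (x >> i) & 1
                if x_bit == 0:
                    # Input sparsity: skip the compute cycle for a zero
                    # input bit.
                    continue
                for j in range(w_bits):
                    w_bit = (w >> j) & 1
                    # Each partial product is a 1-bit AND, shifted by the
                    # combined bit positions; a PIM array would form all
                    # w_bits ANDs of one input bit in parallel on its bitlines.
                    acc += (x_bit & w_bit) << (i + j)
        return acc

    # 3*5 + 0*7 = 15; the zero input contributes no compute cycles at all.
    assert bit_serial_mac([3, 0], [5, 7]) == 15

In hardware, the two zero checks in this sketch map roughly to the abstract's sparsity handling: skipped cycles for zero input bits save dynamic energy, while gating off the datapath for zero weights saves leakage power as well.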
Pages: 354-366
Number of pages: 13
Related Papers
31 records in total
  • [1] T-PIM: An Energy-Efficient Processing-in-Memory Accelerator for End-to-End On-Device Training
    Heo, Jaehoon
    Kim, Junsoo
    Lim, Sukbin
    Han, Wontak
    Kim, Joo-Young
    IEEE JOURNAL OF SOLID-STATE CIRCUITS, 2023, 58 (03) : 600 - 613
  • [2] Z-PIM: A Sparsity-Aware Processing-in-Memory Architecture With Fully Variable Weight Bit-Precision for Energy-Efficient Deep Neural Networks
    Kim, Ji-Hoon
    Lee, Juhyoung
    Lee, Jinsu
    Heo, Jaehoon
    Kim, Joo-Young
    IEEE JOURNAL OF SOLID-STATE CIRCUITS, 2021, 56 (04) : 1093 - 1104
  • [3] ReverSearch: Search-based energy-efficient Processing-in-Memory Architecture
    Li, Weihang
    Chang, Liang
    Fan, Jiajing
    Zhao, Xin
    Zhang, Hengtan
    Lin, Shuisheng
    Zhou, Jun
    2022 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS 22), 2022, : 409 - 413
  • [4] Energy-Efficient DNN Training Processors on Micro-AI Systems
    Han, Donghyeon
    Kang, Sanghoon
    Kim, Sangyeob
    Lee, Juhyoung
    Yoo, Hoi-Jun
    IEEE OPEN JOURNAL OF THE SOLID-STATE CIRCUITS SOCIETY, 2022, 2 : 259 - 275
  • [5] The Hardware and Algorithm Co-Design for Energy-Efficient DNN Processor on Edge/Mobile Devices
    Lee, Jinsu
    Kang, Sanghoon
    Lee, Jinmook
    Shin, Dongjoo
    Han, Donghyeon
    Yoo, Hoi-Jun
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-REGULAR PAPERS, 2020, 67 (10) : 3458 - 3470
  • [6] Quant-PIM: An Energy-Efficient Processing-in-Memory Accelerator for Layerwise Quantized Neural Networks
    Lee, Young Seo
    Chung, Eui-Young
    Gong, Young-Ho
    Chung, Sung Woo
    IEEE EMBEDDED SYSTEMS LETTERS, 2021, 13 (04) : 162 - 165
  • [7] PIMSR: An Energy-Efficient Processing-in-Memory Accelerator for 60 FPS 4K Super-Resolution
    Guan, Juntao
    Guo, Qinghui
    Li, Huanan
    Lai, Rui
    Ding, Ruixue
    Qian, Libo
    Zhu, Zhangming
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS, 2025, 72 (04) : 623 - 627
  • [8] TSUNAMI: Triple Sparsity-Aware Ultra Energy-Efficient Neural Network Training Accelerator With Multi-Modal Iterative Pruning
    Kim, Sangyeob
    Lee, Juhyoung
    Kang, Sanghoon
    Han, Donghyeon
    Jo, Wooyoung
    Yoo, Hoi-Jun
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-REGULAR PAPERS, 2022, 69 (04) : 1494 - 1506
  • [9] DDC-PIM: Efficient Algorithm/Architecture Co-Design for Doubling Data Capacity of SRAM-Based Processing-in-Memory
    Duan, Cenlin
    Yang, Jianlei
    He, Xiaolin
    Qi, Yingjie
    Wang, Yikun
    Wang, Yiou
    He, Ziyan
    Yan, Bonan
    Wang, Xueyan
    Jia, Xiaotao
    Pan, Weitao
    Zhao, Weisheng
    IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2024, 43 (03) : 906 - 918
  • [10] Early Termination Based Training Acceleration for an Energy-Efficient SNN Processor Design
    Choi, Sunghyun
    Lew, Dongwoo
    Park, Jongsun
    IEEE TRANSACTIONS ON BIOMEDICAL CIRCUITS AND SYSTEMS, 2022, 16 (03) : 442 - 455