MINT: Mixed-precision RRAM-based IN-memory Training Architecture

Cited by: 27
Authors
Jiang, Hongwu [1 ]
Huang, Shanshi [1 ]
Peng, Xiaochen [1 ]
Yu, Shimeng [1 ]
Affiliation
[1] Georgia Inst Technol, Sch Elect & Comp Engn, Atlanta, GA 30332 USA
Keywords
RRAM; compute-in-memory; deep neural network; hardware accelerator
DOI
10.1109/iscas45731.2020.9181020
CLC classification
TM [Electrical engineering]; TN [Electronics and communication technology]
Discipline codes
0808; 0809
Abstract
On-chip training of large-scale deep neural networks (DNNs) is challenging. To address the memory wall, compute-in-memory (CIM) is a promising approach that exploits analog computation inside the memory array to speed up vector-matrix multiplication (VMM). On-chip CIM training, however, demands higher weight precision and higher analog-to-digital converter (ADC) resolution. In this work, we propose a mixed-precision RRAM-based CIM architecture that overcomes these challenges and supports on-chip training. In particular, we split the multi-bit weight into the most significant bits (MSBs) and the least significant bits (LSBs): forward and backward propagation are performed in transposable CIM arrays on the MSBs only, while the weight update is performed in regular memory arrays that store the LSBs. The impact of ADC resolution on training accuracy is analyzed. We evaluate the training of a convolutional VGG-like network on the CIFAR-10 dataset with this Mixed-precision IN-memory Training architecture (MINT), showing that it achieves ~91% accuracy under hardware constraints and ~4.46 TOPS/W energy efficiency. Compared with baseline RRAM-based CIM architectures, it achieves 1.35× higher energy efficiency and only 31.9% of the chip size (~98.86 mm² at the 32 nm node).
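The MSB/LSB weight split described in the abstract can be illustrated with a minimal sketch. The specific bit widths (8-bit weights split into 4 MSBs and 4 LSBs), the unsigned-integer encoding, and the carry handling are illustrative assumptions, not details taken from the paper:

```python
# Hypothetical sketch of a mixed-precision MSB/LSB weight split,
# assuming 8-bit unsigned weights with a 4/4 split (not the paper's exact scheme).
import numpy as np

W_BITS, MSB_BITS = 8, 4
LSB_BITS = W_BITS - MSB_BITS

def split(w):
    """Split integer weights into MSB and LSB parts."""
    msb = w >> LSB_BITS               # would reside in the transposable CIM array
    lsb = w & ((1 << LSB_BITS) - 1)   # would reside in a regular memory array
    return msb, lsb

def forward(msb, x):
    """VMM uses only the MSBs, scaled back to weight magnitude."""
    return x @ (msb << LSB_BITS)

def update(msb, lsb, delta):
    """Weight update accumulates into the LSBs; carries propagate to the MSBs."""
    total = ((msb << LSB_BITS) | lsb) + delta
    total = np.clip(total, 0, (1 << W_BITS) - 1)  # saturate to the weight range
    return split(total)

w = np.array([[200, 15], [37, 129]], dtype=np.int64)
msb, lsb = split(w)
msb2, lsb2 = update(msb, lsb, np.array([[3, -3], [0, 1]]))
# Recombining MSBs and LSBs recovers the updated full-precision weights
assert np.array_equal((msb2 << LSB_BITS) | lsb2, w + [[3, -3], [0, 1]])
```

The key point the sketch captures is that the costly analog VMM only ever sees the coarse MSB part, while small gradient updates land in cheap digital LSB storage and only occasionally carry over into the CIM array.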
Pages: 5