An In-Memory-Computing STT-MRAM Macro with Analog ReLU and Pooling Layers for Ultra-High Efficient Neural Network

Cited by: 2
Authors
Jiang, Linjun [1 ]
Sun, Sifan [1 ]
Ge, Jinming [1 ]
Zhang, He [1 ]
Kang, Wang [1 ]
Affiliations
[1] Beihang Univ, Beijing, Peoples R China
Source
2023 IEEE 12th Non-Volatile Memory Systems and Applications Symposium (NVMSA) | 2023
Funding
Beijing Natural Science Foundation;
Keywords
in-memory computing; STT-MRAM; neural network accelerator; SRAM MACRO; COMPUTATION; TRENDS;
DOI
10.1109/NVMSA58981.2023.00026
CLC Number
TP3 [Computing technology, computer technology];
Subject Classification Code
0812 ;
Abstract
In-memory computing (IMC) technology has great potential for neural network accelerators. However, the energy efficiency of current mainstream IMC designs is limited, because the peripheral circuitry (e.g., the ADC) for multiply-and-accumulate (MAC) operations and nonlinear operations (e.g., max pooling, ReLU) is expensive. This paper proposes an in-memory computing STT-MRAM macro with analog ReLU and pooling layers for efficient neural networks. By implementing the activation and max pooling layers in the analog domain before the ADC, the overhead (both latency and energy) of the ADC can be significantly reduced. The macro was implemented in an industrial 22 nm process, and the results show that our STT-MRAM IMC macro reduces energy by 2.02~2.71x and latency by 1.8x across various macro scales.
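The following is a minimal numerical sketch (not the paper's circuit) of why moving ReLU and max pooling before the ADC cuts conversion overhead: only the winning analog value of each pooling window is digitized instead of every MAC output. The 4-bit ADC resolution, 2x2 pooling window, feature-map size, and value ranges are illustrative assumptions, not figures from the paper.

```python
# Illustrative sketch: analog ReLU + max pooling before the ADC reduces the
# number of ADC conversions by the pooling factor. All parameters are assumed.
import numpy as np

ADC_BITS = 4          # assumed ADC resolution
POOL = 2              # assumed 2x2 max-pooling window
rng = np.random.default_rng(0)

def adc(x, full_scale=1.0, bits=ADC_BITS):
    """Quantize an analog value to a digital code (one 'ADC conversion')."""
    levels = 2**bits - 1
    return np.round(np.clip(x, 0.0, full_scale) / full_scale * levels)

# Analog MAC outputs of one feature map, e.g. bitline voltages after in-memory MACs.
analog_macs = rng.normal(0.3, 0.4, size=(8, 8))

# Conventional flow: digitize every MAC output, then apply ReLU/pooling digitally.
digital = adc(np.maximum(analog_macs, 0.0))
conventional_conversions = analog_macs.size

# Analog ReLU + max pooling before the ADC: pick the window maximum and clamp
# negatives to zero (ReLU and max pooling commute), then digitize only that
# single value per window.
pooled_analog = np.maximum(
    analog_macs.reshape(8 // POOL, POOL, 8 // POOL, POOL).max(axis=(1, 3)), 0.0)
proposed = adc(pooled_analog)
proposed_conversions = pooled_analog.size

print(f"ADC conversions: {conventional_conversions} -> {proposed_conversions} "
      f"({conventional_conversions / proposed_conversions:.0f}x fewer)")
```

For a 2x2 window this sketch digitizes 16 values instead of 64, i.e. 4x fewer ADC conversions; the paper's reported 2.02~2.71x energy and 1.8x latency savings additionally account for the analog ReLU/pooling circuitry and the rest of the macro.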
Pages: 44-49
Number of pages: 6