An Approximate DRAM Design with an Adjustable Refresh Scheme for Low-power Deep Neural Networks

Citations: 2
Authors
Nguyen, Duy Thanh [1]
Kim, Hyun [2,3]
Lee, Hyuk-Jae [1]
Affiliations
[1] Seoul Natl Univ, Dept Elect & Comp Engn, Interuniv Semicond Res Ctr, Seoul 08826, South Korea
[2] Seoul Natl Univ Sci & Technol, Dept Elect & Informat Engn, Seoul 01811, South Korea
[3] Seoul Natl Univ Sci & Technol, Res Ctr Elect & Informat Technol, Seoul 01811, South Korea
Keywords
Deep learning; approximate DRAM; low power DRAM; bit-level refresh; fine-grained refresh
DOI
10.5573/JSTS.2021.21.2.134
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Discipline Codes
0808; 0809
Abstract
A DRAM device requires periodic refresh operations to preserve data integrity, which incurs significant power consumption. Lowering the refresh rate reduces this power consumption; however, it may cause a loss of data stored in DRAM cells, which affects the correctness of computation. This paper proposes a new memory architecture for deep learning applications that reduces refresh power consumption while maintaining accuracy. Exploiting the error-tolerant property of deep learning applications, the proposed architecture avoids the accuracy drop caused by data loss by flexibly controlling the refresh operation for different bits, depending on their criticality. The proposed approximate DRAM architecture reorganizes stored data so that bits are mapped to different DRAM devices according to their significance: critical bits are stored in more frequently refreshed devices, while non-critical bits are stored in less frequently refreshed devices. Compared to a conventional DRAM, the proposed approximate DRAM requires only a separate chip-select signal for each device in a DRAM rank and a minor change in the memory controller. Simulation results show that the refresh power consumption is reduced by 66.5% with a negligible accuracy drop on state-of-the-art deep neural networks.
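The bit-significance-based mapping described in the abstract can be illustrated with a short software sketch. The Python snippet below is a minimal illustration, not the authors' implementation: the 4-bit critical/non-critical split, the per-bit retention-failure probability, and all function names are assumptions for demonstration. It splits 8-bit quantized weights into bit-planes (one per modeled DRAM device), models slow refresh on the non-critical planes as random bit flips, and checks that the resulting weight perturbation is bounded by the non-critical bit width.

import numpy as np

# Minimal sketch of bit-significance-aware data mapping for 8-bit weights.
# Assumed values for demonstration only (not from the paper):
CRITICAL_BITS = 4   # assumed: 4 MSB planes kept at the normal refresh rate
FAIL_PROB = 1e-4    # assumed per-bit retention-failure rate at slow refresh

def to_bitplanes(weights_u8):
    """Split uint8 weights into 8 bit-planes, one per modeled DRAM device."""
    return [((weights_u8 >> b) & 1).astype(np.uint8) for b in range(8)]

def from_bitplanes(planes):
    """Reassemble uint8 weights from the 8 bit-planes."""
    out = np.zeros_like(planes[0], dtype=np.uint8)
    for b, plane in enumerate(planes):
        out |= plane << b
    return out

def slow_refresh(plane, rng, p=FAIL_PROB):
    """Model retention loss on a slowly refreshed device as random bit flips."""
    flips = (rng.random(plane.shape) < p).astype(np.uint8)
    return plane ^ flips

rng = np.random.default_rng(0)
weights = rng.integers(0, 256, size=10_000, dtype=np.uint8)

planes = to_bitplanes(weights)
for b in range(8 - CRITICAL_BITS):          # only non-critical (LSB) planes
    planes[b] = slow_refresh(planes[b], rng)

restored = from_bitplanes(planes)
max_err = int(np.abs(weights.astype(np.int16) - restored.astype(np.int16)).max())
print(f"worst-case perturbation: {max_err} LSBs "
      f"(bound: {2**(8 - CRITICAL_BITS) - 1})")

In hardware, the per-device refresh rates that this sketch emulates in software would be selected through the separated chip-select signals mentioned in the abstract; the software model only shows why confining retention errors to low-order bit-planes keeps the weight perturbation small.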
Pages: 134-142
Page count: 9