An Approximate DRAM Design with an Adjustable Refresh Scheme for Low-power Deep Neural Networks

Cited by: 2
Authors
Nguyen, Duy Thanh [1]
Kim, Hyun [2,3]
Lee, Hyuk-Jae [1]
Affiliations
[1] Seoul Natl Univ, Dept Elect & Comp Engn, Interuniv Semicond Res Ctr, Seoul 08826, South Korea
[2] Seoul Natl Univ Sci & Technol, Dept Elect & Informat Engn, Seoul 01811, South Korea
[3] Seoul Natl Univ Sci & Technol, Res Ctr Elect & Informat Technol, Seoul 01811, South Korea
Keywords
Deep learning; approximate DRAM; low power DRAM; bit-level refresh; fine-grained refresh;
DOI
10.5573/JSTS.2021.21.2.134
CLC codes
TM [Electrical technology]; TN [Electronic technology, communication technology];
Discipline codes
0808; 0809;
Abstract
A DRAM device requires periodic refresh operations to preserve data integrity, which incurs significant power consumption. Slowing down the refresh rate reduces this power consumption; however, it may cause a loss of data stored in a DRAM cell, which affects the correctness of computation. This paper proposes a new memory architecture for deep learning applications that reduces refresh power consumption while maintaining accuracy. Exploiting the error tolerance of deep learning applications, the proposed memory architecture avoids the accuracy drop caused by data loss by flexibly controlling the refresh operation for different bits according to their criticality. The approximate DRAM architecture reorganizes the data of deep learning applications so that they are mapped to different DRAM devices according to their bit significance: critical bits are stored in more frequently refreshed devices, while non-critical bits are stored in less frequently refreshed devices. Compared to conventional DRAM, the proposed approximate DRAM requires only a separation of the chip-select signal for each device in a DRAM rank and a minor change in the memory controller. Simulation results show that the refresh power consumption is reduced by 66.5% with a negligible accuracy drop on state-of-the-art deep neural networks.
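The bit-significance mapping described in the abstract can be illustrated with a minimal simulation. This is a hedged sketch, not the authors' implementation: it assumes a 16-bit value split into a high (critical) byte stored on a frequently refreshed device, taken to be error-free, and a low (non-critical) byte stored on a slowly refreshed device where retention failures may flip bits at an assumed rate (`LOW_BYTE_FLIP_PROB` is an illustrative parameter, not a figure from the paper).

```python
import random

# Assumed per-bit retention-failure probability on the slowly
# refreshed device (illustrative value only).
LOW_BYTE_FLIP_PROB = 1e-3

def store_and_read(value, rng):
    """Split a 16-bit value across two DRAM 'devices' and read it back."""
    high = (value >> 8) & 0xFF  # critical bits -> normal refresh, error-free
    low = value & 0xFF          # non-critical bits -> relaxed refresh
    # Only the slowly refreshed device may lose bits.
    for bit in range(8):
        if rng.random() < LOW_BYTE_FLIP_PROB:
            low ^= 1 << bit
    return (high << 8) | low

rng = random.Random(0)
values = [rng.randrange(1 << 16) for _ in range(10000)]
errors = [abs(store_and_read(v, rng) - v) for v in values]
# Any corruption is confined to the low byte, so the numeric error
# of each value is bounded below 256 (the weight of the low byte).
assert max(errors) < 256
```

The sketch shows why the scheme preserves accuracy: retention failures in the relaxed-refresh device perturb only the least significant bits, so the worst-case numeric error per value is bounded, which error-tolerant deep learning workloads can absorb.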
Pages: 134-142
Page count: 9
Related papers
50 items total
  • [41] Low-Power Appliance Recognition Using Recurrent Neural Networks
    Pratama, Azkario R.
    Simanjuntak, Frans J.
    Lazovik, Alexander
    Aiello, Marco
    APPLICATIONS OF INTELLIGENT SYSTEMS, 2018, 310 : 239 - 250
  • [42] Low-power synapse/neuron cell for artificial neural networks
    Division of Circuits and Systems, Sch. Elec. Electron. Eng., N., Singapore, Singapore
    MICROELECTRONICS JOURNAL, 1999, 30 (12) : 1261 - 1264
  • [43] A low-power synapse/neuron cell for artificial neural networks
    Lau, KT
    Lee, ST
    Chan, PK
    MICROELECTRONICS JOURNAL, 1999, 30 (12) : 1261 - 1264
  • [44] Low-Power Online ECG Analysis Using Neural Networks
    Modarressi, Mehdi
    Yasoubi, Ali
    Modarressi, Maryam
    19TH EUROMICRO CONFERENCE ON DIGITAL SYSTEM DESIGN (DSD 2016), 2016, : 547 - 552
  • [45] LOW-POWER BUILDING-BLOCK FOR ARTIFICIAL NEURAL NETWORKS
    LEE, ST
    LAU, KT
    ELECTRONICS LETTERS, 1995, 31 (19) : 1618 - 1619
  • [46] Low-power neural networks for semantic segmentation of satellite images
    Bahl, Gaetan
    Daniel, Lionel
    Moretti, Matthieu
    Lafarge, Florent
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW), 2019, : 2469 - 2476
  • [47] CyNAPSE: A Low-power Reconfigurable Neural Inference Accelerator for Spiking Neural Networks
    Saunak Saha
    Henry Duwe
    Joseph Zambreno
    Journal of Signal Processing Systems, 2020, 92 : 907 - 929
  • [48] CyNAPSE: A Low-power Reconfigurable Neural Inference Accelerator for Spiking Neural Networks
    Saha, Saunak
    Duwe, Henry
    Zambreno, Joseph
    JOURNAL OF SIGNAL PROCESSING SYSTEMS FOR SIGNAL IMAGE AND VIDEO TECHNOLOGY, 2020, 92 (09): : 907 - 929
  • [49] Conversion of Artificial Recurrent Neural Networks to Spiking Neural Networks for Low-power Neuromorphic Hardware
    Diehl, Peter U.
    Zarrella, Guido
    Cassidy, Andrew
    Pedroni, Bruno U.
    Neftci, Emre
    2016 IEEE INTERNATIONAL CONFERENCE ON REBOOTING COMPUTING (ICRC), 2016
  • [50] LOW-POWER DESIGN
    HINES, J
    KO, U
    MEIER, SE
    NAPPER, S
    PEDRAM, M
    ROY, K
    IEEE DESIGN & TEST OF COMPUTERS, 1995, 12 (04): : 84 - 90