Local Learning in RRAM Neural Networks with Sparse Direct Feedback Alignment

Cited by: 3
Authors
Crafton, Brian [1 ]
West, Matt [2 ]
Basnet, Padip [2 ]
Vogel, Eric [2 ]
Raychowdhury, Arijit [1 ]
Affiliations
[1] Georgia Inst Technol, Sch Elect & Comp Engn, Atlanta, GA 30332 USA
[2] Georgia Inst Technol, Sch Mat Sci & Engn, Atlanta, GA 30332 USA
Source
2019 IEEE/ACM INTERNATIONAL SYMPOSIUM ON LOW POWER ELECTRONICS AND DESIGN (ISLPED) | 2019
DOI
10.1109/islped.2019.8824820
Chinese Library Classification
TP3 [Computing Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Neural networks utilizing non-volatile random access memory (NVM) offer substantial power reduction over traditional CMOS implementations. RRAM (resistive random access memory) is one such emerging memory technology, offering low energy, good endurance, and a large analog conductance window. When implemented in a crossbar architecture, these networks bypass the von Neumann bottleneck by performing compute in-memory. This architecture works well for inference; training the network, however, is far more challenging. Networks built with RRAM can be trained on-chip with gradient descent or off-chip, with the trained weights then transferred to the device. Backpropagation, while effective for training von Neumann architectures, is inefficient when memory and compute are co-located. In what is commonly referred to as the weight transport problem, each neuron's update depends on the weights and errors located deeper in the network, so the weights of every downstream layer must be read before the error can be computed and applied. This presents a key challenge for efficient on-chip training in non-von Neumann architectures. In this work we demonstrate an alternative to backpropagation, sparse direct feedback alignment, which bypasses the weight transport problem. We simulate crossbars of HfOx RRAM based on experimental data to explore the performance, area, and energy trade-offs of using bio-plausible learning algorithms on the MNIST and EMNIST datasets.
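
To make the learning rule concrete, the following is a minimal sketch of sparse direct feedback alignment on a small multilayer perceptron, written in NumPy. This is not the authors' code: the layer sizes, feedback density, and learning rate are illustrative assumptions. What it demonstrates is the core idea named in the abstract: each hidden layer receives the output error through a fixed, sparse random matrix, so no layer weights are read back (transported) during learning; in an RRAM implementation each matrix product would map onto a crossbar matrix-vector multiply.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative layer sizes for MNIST-scale inputs (assumption, not from the paper).
    sizes = [784, 400, 400, 10]
    W = [rng.normal(0.0, np.sqrt(2.0 / m), (m, n))
         for m, n in zip(sizes[:-1], sizes[1:])]

    def sparse_feedback(n_out, n_hidden, density=0.1):
        # Fixed random feedback matrix with most entries zeroed
        # (the "sparse" in sparse direct feedback alignment).
        B = rng.normal(0.0, 1.0, (n_out, n_hidden))
        return B * (rng.random((n_out, n_hidden)) < density)

    # One fixed sparse matrix per hidden layer: the output error is projected
    # directly back, so no forward weights are read during the backward pass.
    B = [sparse_feedback(sizes[-1], n) for n in sizes[1:-1]]

    def softmax(z):
        e = np.exp(z - z.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    def train_step(x, y_onehot, lr=0.01):
        # Forward pass; in hardware each product is a crossbar matrix-vector multiply.
        a1 = np.maximum(x @ W[0], 0.0)
        a2 = np.maximum(a1 @ W[1], 0.0)
        e = softmax(a2 @ W[2]) - y_onehot      # output error (softmax cross-entropy)
        # Error reaches each hidden layer via fixed random B[i], not via W[i+1].T,
        # which is what removes the weight transport problem.
        d2 = (e @ B[1]) * (a2 > 0)
        d1 = (e @ B[0]) * (a1 > 0)
        W[2] -= lr * a2.T @ e
        W[1] -= lr * a1.T @ d2
        W[0] -= lr * x.T @ d1

    # Toy usage on random data standing in for an MNIST batch.
    x = rng.random((32, 784))
    y = np.eye(10)[rng.integers(0, 10, size=32)]
    train_step(x, y)

Because the feedback matrices are fixed and sparse, they are never updated and never require reading the forward weights, which is the read traffic that makes on-chip backpropagation costly in a compute-in-memory architecture.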
Pages: 6