Resistive Memory-Based In-Memory Computing: From Device and Large-Scale Integration System Perspectives

Cited: 69
Authors
Yan, Bonan [1 ]
Li, Bing [1 ]
Qiao, Ximing [1 ]
Xue, Cheng-Xin [2 ]
Chang, Meng-Fan [2 ]
Chen, Yiran [1 ]
Li, Hai [1 ]
Affiliations
[1] Duke Univ, Dept Elect & Comp Engn, 100 Sci Dr, Durham, NC 27708 USA
[2] Natl Tsing Hua Univ, Dept Elect Engn, Delta Bldg 101,Sect 2,Kuang Fu Rd, Hsinchu 30013, Taiwan
Funding
National Science Foundation (USA)
Keywords
accelerators; in-memory computing; neural networks; process-in-memory; resistive memory; NONVOLATILE MEMORY; LOGIC OPERATIONS; NEURAL-NETWORKS; RRAM; MECHANISM; SYNAPSE; ARRAY;
DOI
10.1002/aisy.201900068
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
In-memory computing is a computing scheme that integrates data storage and arithmetic computation functions. Resistive random access memory (RRAM) arrays with innovative peripheral circuitry provide the capability of performing vector-matrix multiplication beyond basic Boolean logic. With such a memory-computation duality, RRAM-based in-memory computing enables an efficient hardware solution for matrix-multiplication-dependent neural networks and related applications. Herein, recent developments in RRAM nanoscale devices and parallel progress at the circuit and microarchitecture layers are discussed. Emphasis is placed on the RRAM device properties and characteristics that make these devices well suited to implementing analog synapses and neurons. 3D-stackable RRAM and on-chip training are introduced in the context of large-scale integration. The circuit design and system organization of RRAM-based in-memory computing are essential to breaking the von Neumann bottleneck. These outcomes illuminate the way toward the large-scale implementation of ultra-low-power, dense neural network accelerators.
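The vector-matrix multiplication described in the abstract follows from Ohm's law at each cell (I = V·G) and Kirchhoff's current summation along each bit line. A minimal NumPy sketch of this idealized behavior is shown below; the conductance range and the differential-pair weight mapping are illustrative assumptions, not values or schemes taken from the article, and non-idealities such as wire resistance, device variation, and ADC quantization are ignored.

```python
import numpy as np

# Hypothetical RRAM cell conductance range in siemens (illustrative only).
G_MIN, G_MAX = 1e-6, 1e-4

def crossbar_vmm(weights, voltages):
    """Ideal RRAM crossbar vector-matrix multiply.

    Each signed weight is mapped onto a differential pair of cells
    (G_pos - G_neg). The output current on each bit line is the
    Kirchhoff sum of the per-cell Ohm's-law contributions V_i * G_ij.
    """
    w = np.asarray(weights, dtype=float)
    v = np.asarray(voltages, dtype=float)
    scale = (G_MAX - G_MIN) / np.max(np.abs(w))   # map |w| into the conductance span
    g_pos = G_MIN + scale * np.clip(w, 0.0, None)  # positive-weight array
    g_neg = G_MIN + scale * np.clip(-w, 0.0, None) # negative-weight array
    # Bit-line currents; the subtraction models peripheral sense circuitry.
    i_out = v @ g_pos - v @ g_neg
    return i_out / scale  # rescale currents back to weight units

# Usage: the analog result matches the digital product v @ W.
W = np.array([[1.0, -2.0], [0.5, 3.0]])
v = np.array([0.2, 0.1])
print(crossbar_vmm(W, v))  # ≈ v @ W
```

Because G_MIN appears in both differential arrays, it cancels in the subtraction, so the ideal output is exactly proportional to v @ W; in real arrays this cancellation is only approximate.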
Pages: 16