Integration of Ag-CBRAM crossbars and Mott ReLU neurons for efficient implementation of deep neural networks in hardware

Cited by: 2
Authors
Shi, Yuhan [1 ]
Oh, Sangheon [1 ]
Park, Jaeseoung [1 ]
del Valle, Javier [2 ]
Salev, Pavel [3 ]
Schuller, Ivan K. [3 ]
Kuzum, Duygu [1 ]
Affiliations
[1] Univ Calif San Diego, Dept Elect & Comp Engn, La Jolla, CA 92093 USA
[2] Univ Geneva, Dept Quantum Matter Phys, Geneva, Switzerland
[3] Univ Calif San Diego, Dept Phys, La Jolla, CA USA
Source
NEUROMORPHIC COMPUTING AND ENGINEERING | 2023, Vol. 3, No. 03
Funding
US National Science Foundation;
Keywords
RRAM; memristor; Mott insulators; crossbar; activation functions; neural networks
DOI
10.1088/2634-4386/aceea9
CLC Classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Discipline Codes
0808; 0809;
Abstract
In-memory computing with emerging non-volatile memory devices (eNVMs) has shown promising results in accelerating matrix-vector multiplications. However, activation function calculations are still being implemented with general-purpose processors or large and complex neuron peripheral circuits. Here, we present the integration of Ag-based conductive bridge random access memory (Ag-CBRAM) crossbar arrays with Mott rectified linear unit (ReLU) activation neurons for scalable, energy- and area-efficient hardware (HW) implementation of deep neural networks. We develop Ag-CBRAM devices that can achieve a high ON/OFF ratio and multi-level programmability. Compact and energy-efficient Mott ReLU neuron devices implementing the ReLU activation function are directly connected to the columns of Ag-CBRAM crossbars to compute the output from the weighted-sum current. We implement convolution filters and activations for VGG-16 using our integrated HW and demonstrate the successful generation of feature maps for CIFAR-10 images in HW. Our approach paves the way toward building a highly compact and energy-efficient eNVMs-based in-memory computing system.
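The crossbar-plus-ReLU operation the abstract describes can be sketched numerically: each column of the array sums current contributions I = G·V, and a column-connected ReLU neuron thresholds that sum. This is an illustrative model only, not the authors' circuit; mapping signed weights onto a differential pair of nonnegative conductances (G+ and G-) is a common convention assumed here:

```python
import numpy as np

def crossbar_mvm_relu(g_pos, g_neg, v_in):
    """Idealized crossbar column computation (sketch, not the authors' code).

    g_pos, g_neg: nonnegative conductance matrices (rows x columns);
                  the effective signed weight is g_pos - g_neg.
    v_in:         input voltages applied to the rows (word lines).
    Returns the ReLU of each column's weighted-sum current.
    """
    # Kirchhoff's current law: each column current is the dot product
    # of the row voltages with that column's conductances.
    i_col = (g_pos - g_neg).T @ v_in
    # The Mott neuron acts as a ReLU on the summed column current.
    return np.maximum(i_col, 0.0)

# Hypothetical 3-row x 2-column example with differential conductances.
g_pos = np.array([[1.0, 0.0],
                  [0.5, 1.0],
                  [0.0, 0.5]])
g_neg = np.array([[0.0, 0.5],
                  [0.0, 0.0],
                  [1.0, 0.0]])
v = np.array([1.0, 0.5, 1.0])
print(crossbar_mvm_relu(g_pos, g_neg, v))  # → [0.25 0.5 ]
```

In a physical array the conductances are set by programming the Ag-CBRAM cells (hence the need for multi-level programmability), and the ReLU is computed in place by the Mott neuron device rather than by a digital processor.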
Pages: 12
Related Papers
4 items
  • [1] An efficient hardware implementation of feed-forward neural networks
    Szabó, T.
    Horváth, G.
    APPLIED INTELLIGENCE, 2004, 21 (02): 143 - 158
  • [2] Flash Memory Array for Efficient Implementation of Deep Neural Networks
    Han, Runze
    Xiang, Yachen
    Huang, Peng
    Shan, Yihao
    Liu, Xiaoyan
    Kang, Jinfeng
    ADVANCED INTELLIGENT SYSTEMS, 2021, 3 (05)
  • [3] Efficient Hardware Implementation of Nonlinear Moving-horizon State Estimation with Artificial Neural Networks
    Vatanabe Brunello, Rafael Koji
    Sampaio, Renato Coral
    Llanos, Carlos H.
    Coelho, Leandro dos Santos
    Hultmann Ayala, Helon Vicente
    IFAC PAPERSONLINE, 2020, 53 (02): 7813 - 7818
  • [4] Hardware implementation of evolvable block-based neural networks utilizing a cost efficient sigmoid-like activation function
    Nambiar, Vishnu P.
    Khalil-Hani, Mohamed
    Sahnoun, Riadh
    Marsono, M. N.
    NEUROCOMPUTING, 2014, 140: 228 - 241