Memristor-Based Neural Logic Blocks for Nonlinearly Separable Functions

Cited by: 32
Authors
Soltiz, Michael [1 ]
Kudithipudi, Dhireesha [1 ]
Merkel, Cory [1 ]
Rose, Garrett S. [2 ]
Pino, Robinson E. [3 ]
Affiliations
[1] Rochester Inst Technol, Dept Comp Engn, Nanocomp Res Lab, Rochester, NY 14623 USA
[2] RITA, Air Force Res Lab, Trusted Syst Branch, Rome, NY 13441 USA
[3] ICF Int, Fairfax, VA 22031 USA
Keywords
Neuromorphic; stochastic gradient descent; memristors; OCR; reconfigurable logic; neural networks; RECOGNITION; NETWORK;
DOI
10.1109/TC.2013.75
Chinese Library Classification (CLC)
TP3 [Computing technology; computer technology];
Discipline classification code
0812;
Abstract
Neural logic blocks (NLBs) enable the realization of biologically inspired reconfigurable hardware. Networks of NLBs can be trained to perform complex computations such as multilevel Boolean logic and optical character recognition (OCR) in an area- and energy-efficient manner. Recently, several groups have proposed perceptron-based NLB designs with thin-film memristor synapses. These designs use a static threshold activation function, which limits the set of learnable functions to those that are linearly separable. In this work, we propose two NLB designs, the robust adaptive NLB (RANLB) and the multithreshold NLB (MTNLB), which overcome this limitation by allowing the effective activation function to be adapted during the training process. Consequently, both designs enable any logic function to be implemented in a single-layer NLB network. The proposed NLBs are designed, simulated, and trained to implement ISCAS-85 benchmark circuits as well as OCR. The MTNLB achieves a 90 percent improvement in energy-delay product (EDP) over lookup table (LUT)-based implementations of the ISCAS-85 benchmarks and up to a 99 percent improvement over a previous NLB implementation. As a compromise, the RANLB provides a smaller EDP improvement but has an average training time of only approximately 4 cycles for 4-input logic functions, compared to the MTNLB's approximately 8-cycle average training time.
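The abstract's central claim, that replacing a single static threshold with a multithreshold activation lets one neuron realize nonlinearly separable functions, can be illustrated with a minimal sketch. The function, weights, and threshold values below are illustrative assumptions, not the paper's trained memristor conductances: a neuron that fires when its weighted sum has crossed an odd number of thresholds computes XOR, which no static-threshold perceptron can.

```python
# Minimal sketch (illustrative, not the authors' circuit): a single
# "multithreshold" neuron whose output is the parity of the number of
# thresholds its weighted sum has crossed.

def multithreshold_neuron(x, weights, thresholds):
    """Fire iff the weighted sum crosses an odd number of thresholds."""
    s = sum(w * xi for w, xi in zip(weights, x))
    return sum(s >= t for t in thresholds) % 2

# XOR with unit weights and two thresholds at 0.5 and 1.5:
# weighted sums are 0, 1, 1, 2 -> thresholds crossed: 0, 1, 1, 2
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, multithreshold_neuron(x, (1, 1), (0.5, 1.5)))
```

With unit weights, the weighted sums 0, 1, 1, 2 cross an odd number of thresholds exactly for inputs (0, 1) and (1, 0), so the single unit reproduces XOR; a static-threshold perceptron would need a hidden layer.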
Pages: 1597-1606
Page count: 10
References (27 total)
[1] [Anonymous], 2013, SIS 1 3 UNOFFICIAL D
[2] Chabi D. 2011 IEEE/ACM International Symposium on Nanoscale Architectures (NANOARCH), 2011, p. 137. DOI 10.1109/NANOARCH.2011.5941495
[3] Drachman DA. Do we have brain to spare? Neurology, 2005, 64(12): 2004-2005
[4] Ebong IE, Mazumder P. CMOS and memristor-based neural network design for position detection. Proceedings of the IEEE, 2012, 100(6): 2050-2060
[5] Farquhar E, Gordon C, Hasler P. A field programmable neural array. 2006 IEEE International Symposium on Circuits and Systems, 2006, pp. 4114+
[6] Goh T.H., 1992, IJCNN, p. 435
[7] Hansen MC, Yalcin H, Hayes JP. Unveiling the ISCAS-85 benchmarks: a case study in reverse engineering. IEEE Design & Test of Computers, 1999, 16(3): 72-80
[8] Hussain B, Kabuka MR. A novel feature recognition neural network and its application to character recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1994, 16(1): 98-106
[9] Jo SH, Chang T, Ebong I, Bhadviya BB, Mazumder P, Lu W. Nanoscale memristor device as synapse in neuromorphic systems. Nano Letters, 2010, 10(4): 1297-1301
[10] Likharev KK. 2003 Third IEEE Conference on Nanotechnology, 2003, p. 339