Hardware-Based Spiking Neural Network Using a TFT-Type AND Flash Memory Array Architecture Based on Direct Feedback Alignment

Cited by: 13
Authors
Kang, Won-Mook [1 ]
Kwon, Dongseok [1 ]
Woo, Sung Yun [1 ]
Lee, Soochang [1 ]
Yoo, Honam [1 ]
Kim, Jangsaeng [1 ]
Park, Byung-Gook [1 ]
Lee, Jong-Ho [1 ]
Affiliations
[1] Seoul Natl Univ, Dept Elect & Comp Engn, Interuniv Semicond Res Ctr, Seoul 08826, South Korea
Keywords
Hardware-based neural network; spiking neural network; flash memory synaptic device; AND type crossbar array; on-chip training; supervised learning; direct feedback alignment; SYSTEM; PLASTICITY;
DOI
10.1109/ACCESS.2021.3080310
Chinese Library Classification
TP [Automation & Computer Technology];
Discipline code
0812;
Abstract
A hardware-based neural network that enables on-chip training is designed using a thin-film transistor (TFT)-type AND flash memory array architecture. The synaptic device constituting the array features a doped p-type body, a SiO2/Si3N4/Al2O3 gate insulator stack, and a partially curved poly-Si channel. The doped body reduces the circuit burden on the high-voltage drivers required for both the source and drain lines when the synaptic weights are updated. The high-κ material in the gate insulator stack helps lower the operating voltage of the device. As the device scales down, its structural characteristics can increase the efficiency of memory operation and the immunity to the voltage-drop effect that occurs along the bit lines of the array. In an AND array architecture using the fabricated synaptic devices, a pulse scheme for selective memory operation is proposed and verified experimentally. Because the direct feedback alignment (DFA) algorithm does not require the forward and backward paths to share the same synaptic weights, the AND array architecture can be exploited to design an efficient on-chip training neural network. Pulse schemes suited to the proposed AND array architecture are also devised to implement the DFA algorithm. In a system-level simulation, a recognition accuracy of up to 97.01% is obtained on the MNIST pattern-learning task using the proposed pulse scheme and computing architecture.
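The DFA property the abstract relies on — the backward path projects the output error through a fixed random matrix instead of the transpose of the forward weights, so forward and backward arrays need not store the same values — can be sketched in a few lines. This is a minimal NumPy illustration of the algorithm itself, not the paper's hardware or pulse-scheme implementation; the layer sizes, learning rate, and function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-layer network: 784 -> 100 -> 10 (sizes are illustrative)
n_in, n_hid, n_out = 784, 100, 10
W1 = rng.normal(0.0, 0.1, (n_hid, n_in))
W2 = rng.normal(0.0, 0.1, (n_out, n_hid))
# Fixed random feedback matrix: replaces W2.T in the backward path (DFA)
B = rng.normal(0.0, 0.1, (n_hid, n_out))

def relu(x):
    return np.maximum(x, 0.0)

def dfa_step(x, target, lr=0.01):
    """One DFA weight update on a single example; returns the squared-error loss."""
    global W1, W2
    a1 = W1 @ x                # hidden pre-activation
    h1 = relu(a1)              # hidden activation
    y = W2 @ h1                # linear output layer
    e = y - target             # output error
    # DFA: the hidden-layer error signal is B @ e (fixed random projection),
    # not W2.T @ e as in standard backpropagation
    d1 = (B @ e) * (a1 > 0)    # gate by the ReLU derivative
    W2 -= lr * np.outer(e, h1)
    W1 -= lr * np.outer(d1, x)
    return float(0.5 * np.sum(e ** 2))
```

Because `B` is fixed and independent of `W2`, a crossbar holding the forward weights never has to be read in transpose during training — the property that makes the AND array architecture attractive for on-chip DFA learning.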
Pages: 73121-73132
Page count: 12