A Stochastic Computational Multi-Layer Perceptron with Backward Propagation

Cited by: 69
Authors
Liu, Yidong [1 ]
Liu, Siting [1 ]
Wang, Yanzhi [2 ]
Lombardi, Fabrizio [3 ]
Han, Jie [1 ]
Affiliations
[1] Univ Alberta, Dept Elect & Comp Engn, Edmonton, AB T6G 1H9, Canada
[2] Syracuse Univ, Elect Engn & Comp Sci Dept, Syracuse, NY 13244 USA
[3] Northeastern Univ, Dept Elect & Comp Engn, Boston, MA 02115 USA
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
Stochastic computation; binary search; neural network; probability estimator; multi-layer perceptron; HARDWARE IMPLEMENTATION; NEURAL-NETWORK;
DOI
10.1109/TC.2018.2817237
CLC number
TP3 [Computing Technology, Computer Technology];
Discipline code
0812;
Abstract
Stochastic computation has recently been proposed for implementing artificial neural networks with reduced hardware and power consumption, but at a decreased accuracy and processing speed. Most existing implementations are based on pre-training, such that the weights of the neurons at different layers are predetermined; these implementations therefore cannot update the values of the network parameters. In this paper, a stochastic computational multi-layer perceptron (SC-MLP) is proposed by implementing the backward propagation algorithm for updating the layer weights. Using extended stochastic logic (ESL), a reconfigurable stochastic computational activation unit (SCAU) is designed to implement different types of activation functions such as tanh and the rectifier function. A triple modular redundancy (TMR) technique is employed to reduce the random fluctuations inherent in stochastic computation. A probability estimator (PE) and a divider based on TMR and a binary search algorithm are further proposed with progressive precision to reduce the required stochastic sequence length, significantly lowering the latency and energy consumption of the SC-MLP. Simulation results show that the proposed design is capable of implementing both the training and inference processes. For the classification of nonlinearly separable patterns, with a slight accuracy loss of 1.32-1.34 percent, the proposed design requires only 28.5-30.1 percent of the area and 18.9-23.9 percent of the energy consumption of a design using floating-point arithmetic. Compared to a fixed-point implementation, the SC-MLP occupies a smaller area (40.7-45.5 percent) and consumes less energy (38.0-51.0 percent) at a similar processing speed, with a slight accuracy drop of 0.15-0.33 percent. The area and energy consumption of the proposed design are 80.7-87.1 percent and 71.9-93.1 percent, respectively, of those of a binarized neural network (BNN), at a similar accuracy.
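The stochastic-computation premise behind the abstract can be shown with a minimal sketch. This is not the paper's design: `encode`, `multiply`, and `decode` are illustrative names, and the software model below ignores the paper's ESL, TMR, and binary-search hardware. It only demonstrates the underlying encoding, in which a value p in [0, 1] is carried by a random bit stream whose fraction of 1s equals p, so multiplication reduces to a bitwise AND; the estimate's fluctuation shrinks as the sequence lengthens, which is the accuracy/latency trade-off the paper's progressive-precision estimator addresses.

```python
import random

def encode(p, length, rng):
    """Encode a probability p as a stochastic bit stream of the given length."""
    return [1 if rng.random() < p else 0 for _ in range(length)]

def multiply(stream_a, stream_b):
    """Bitwise AND of two streams multiplies the probabilities they encode."""
    return [a & b for a, b in zip(stream_a, stream_b)]

def decode(stream):
    """Estimate the encoded probability as the fraction of 1s in the stream."""
    return sum(stream) / len(stream)

rng = random.Random(0)  # fixed seed for a reproducible demonstration
n = 10000               # sequence length: longer streams give tighter estimates
x, y = 0.8, 0.5
product = decode(multiply(encode(x, n, rng), encode(y, n, rng)))
# product is a noisy estimate of x * y = 0.4; the estimation error is the
# random fluctuation that hardware techniques such as TMR aim to suppress.
```

Note that each extra bit of precision roughly squares the required stream length, which is why reducing the stochastic sequence length is central to the paper's latency and energy savings.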
Pages: 1273-1286
Page count: 14