Stochastic Computing for Hardware Implementation of Binarized Neural Networks

Cited by: 29
Authors
Hirtzlin, Tifenn [1 ]
Penkovsky, Bogdan [1 ]
Bocquet, Marc [2 ]
Klein, Jacques-Olivier [1 ]
Portal, Jean-Michel [2 ]
Querlioz, Damien [1 ]
Affiliations
[1] Univ Paris Sud, CNRS, Ctr Nanosci & Nanotechnol, F-91120 Palaiseau, France
[2] Univ Aix Marseille & Toulon, CNRS, Inst Mat Microelect Nanosci Provence, F-13451 Marseille, France
Funding
European Research Council
Keywords
Binarized neural network; stochastic computing; embedded system; MRAM; in-memory computing
DOI
10.1109/ACCESS.2019.2921104
CLC Classification Number
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Binarized neural networks, a recently proposed class of neural networks with minimal memory requirements and no reliance on multiplication, are an outstanding opportunity for realizing compact and energy-efficient inference hardware. However, such networks are usually not entirely binarized: their first layer still operates on fixed-point inputs. In this paper, we propose a stochastic computing version of binarized neural networks in which the input is binarized as well. Simulations on the Fashion-MNIST and CIFAR-10 datasets show that such networks can approach the performance of conventional binarized neural networks. We show that the training procedure should be adapted for use with stochastic computing. Finally, we investigate an ASIC implementation of our scheme in a system that closely associates logic and memory, implemented with spin-torque magnetoresistive random access memory (MRAM). This analysis shows that the stochastic computing approach can yield considerable area savings over conventional binarized neural networks (a 62% area reduction on the Fashion-MNIST task). It can also provide significant energy savings if a modest reduction in accuracy is acceptable: for example, a factor of 2.1 in energy can be saved at the cost of 1.4% of Fashion-MNIST test accuracy. These results highlight the strong potential of binarized neural networks for hardware implementation and show that adapting them to hardware constraints can provide important benefits.
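The abstract describes the scheme only at a high level. As a rough, self-contained sketch of the underlying idea (not the paper's implementation), the Python snippet below encodes fixed-point inputs as stochastic bitstreams, where the mean of a stream approximates the encoded value, and evaluates one time step of an XNOR-popcount binarized layer per bit. All function names (encode, bnn_first_layer), sizes, thresholds, and the stream length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, T):
    """Stochastic encoding: each of the T bits of a stream is 1 with
    probability x, so the stream's mean approximates x in [0, 1]."""
    return (rng.random((T,) + x.shape) < x).astype(np.int8)

def bnn_first_layer(bits, W, thresholds):
    """One time step of a binarized first layer on a binary input vector.
    W has entries in {-1, +1}; XNOR + popcount replaces multiply-accumulate."""
    w01 = (W > 0).astype(np.int8)         # map {-1, +1} weights to {0, 1}
    xnor = 1 - (bits[None, :] ^ w01)      # per-weight XNOR, shape (n_out, n_in)
    return (xnor.sum(axis=1) >= thresholds).astype(np.int8)

# Illustrative sizes (not from the paper): 4 inputs, 3 neurons, T = 64 bits.
x = np.array([0.1, 0.5, 0.9, 0.3])       # fixed-point inputs in [0, 1]
W = rng.choice([-1, 1], size=(3, 4))
thresholds = np.full(3, 2)

streams = encode(x, T=64)                # shape (64, 4): one bit vector per step
outputs = np.array([bnn_first_layer(b, W, thresholds) for b in streams])
print(outputs.mean(axis=0))              # neuron firing rates over the stream
```

Averaging the binary outputs over the stream recovers an analog-like activation, which is why longer streams trade energy for accuracy, consistent with the energy/accuracy trade-off reported in the abstract.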
Pages: 76394-76403
Page count: 10