Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification

Cited by: 764
Authors
Rueckauer, Bodo [1 ,2 ]
Lungu, Iulia-Alexandra [1 ,2 ]
Hu, Yuhuang [1 ,2 ]
Pfeiffer, Michael [1 ,2 ,3 ]
Liu, Shih-Chii [1 ,2 ]
Affiliations
[1] Univ Zurich, Inst Neuroinformat, Zurich, Switzerland
[2] Swiss Fed Inst Technol, Zurich, Switzerland
[3] Bosch Ctr Artificial Intelligence, Renningen, Germany
Keywords
artificial neural network; spiking neural network; deep learning; object classification; deep networks; spiking network conversion
DOI
10.3389/fnins.2017.00682
Chinese Library Classification (CLC)
Q189 [Neuroscience]
Subject Classification Code
071006
Abstract
Spiking neural networks (SNNs) can potentially offer an efficient way of doing inference because the neurons in the networks are sparsely activated and computations are event-driven. Previous work showed that simple continuous-valued deep Convolutional Neural Networks (CNNs) can be converted into accurate spiking equivalents. These networks did not include certain common operations such as max-pooling, softmax, batch normalization, and Inception modules. This paper presents spiking equivalents of these operations, thereby allowing conversion of nearly arbitrary CNN architectures. We show conversion of popular CNN architectures, including VGG-16 and Inception-v3, into SNNs that produce the best results reported to date on MNIST, CIFAR-10, and the challenging ImageNet dataset. SNNs can trade off classification error rate against the number of available operations, whereas deep continuous-valued neural networks require a fixed number of operations to achieve their classification error rate. Using LeNet for MNIST and BinaryNet for CIFAR-10 as examples, we show that with an increase in error rate of a few percentage points, the SNNs can achieve more than 2x reductions in operations compared to the original CNNs. This highlights the potential of SNNs, in particular when deployed on power-efficient neuromorphic spiking neuron chips, for use in embedded applications.
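
Illustrative note (not part of the original record): conversion schemes of this kind typically approximate a ReLU activation in the source CNN by the firing rate of an integrate-and-fire (IF) neuron. The minimal Python sketch below, with function name and parameters chosen here purely for illustration, shows the underlying relationship: an IF neuron with reset-by-subtraction, driven by a constant input, fires at a rate that converges to the (non-negative, clipped) input value.

    # Minimal sketch: an integrate-and-fire neuron with reset-by-subtraction,
    # driven by a constant input, fires at a rate approximating a ReLU output.
    def if_firing_rate(input_value, n_steps=1000, v_thresh=1.0):
        """Simulate one IF neuron for n_steps and return its average firing rate."""
        v = 0.0                       # membrane potential
        spikes = 0
        drive = max(input_value, 0.0) # negative inputs never reach threshold (ReLU-like)
        for _ in range(n_steps):
            v += drive                # integrate the constant input each time step
            if v >= v_thresh:
                spikes += 1
                v -= v_thresh         # reset by subtraction keeps the residual charge
        return spikes / n_steps

    # The firing rate converges to the clipped input value as n_steps grows:
    for a in (0.0, 0.25, 0.5, 0.9):
        print(a, if_firing_rate(a))   # e.g. 0.25 -> ~0.25, 0.9 -> ~0.9

With inputs scaled to [0, 1], as activation normalization in such conversion schemes is intended to ensure, the simulated rate matches the activation up to a quantization error of at most one spike per simulation window.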
Pages: 12