Spiking Deep Convolutional Neural Networks for Energy-Efficient Object Recognition

Cited by: 596
Authors
Cao, Yongqiang [1 ]
Chen, Yang [1 ]
Khosla, Deepak [1 ]
Affiliations
[1] HRL Labs LLC, Malibu, CA 90265 USA
Keywords
Deep learning; Machine learning; Convolutional neural networks; Spiking neural networks; Neuromorphic circuits; Object recognition; Receptive fields; View-invariant representations; Neocognitron; Models
DOI
10.1007/s11263-014-0788-3
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep-learning neural networks such as the convolutional neural network (CNN) have shown great potential as a solution for difficult vision problems, such as object recognition. Spiking neural network (SNN)-based architectures have shown great potential for realizing ultra-low power consumption using spike-based neuromorphic hardware. This work describes a novel approach for converting a deep CNN into an SNN that enables mapping the CNN to spike-based hardware architectures. Our approach first tailors the CNN architecture to fit the requirements of an SNN, then trains the tailored CNN in the same way as one would train a conventional CNN, and finally applies the learned network weights to an SNN architecture derived from the tailored CNN. We evaluate the resulting SNN on the publicly available Defense Advanced Research Projects Agency (DARPA) Neovision2 Tower and CIFAR-10 datasets and show object recognition accuracy similar to that of the original CNN. Our SNN implementation is amenable to direct mapping to spike-based neuromorphic hardware, such as the chips being developed under the DARPA SyNAPSE program. Our hardware mapping analysis suggests that the SNN implementation on such spike-based hardware is two orders of magnitude more energy-efficient than the original CNN implementation on off-the-shelf FPGA-based hardware.
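To illustrate the general recipe described in the abstract (train a tailored CNN conventionally, then reuse its weights in spiking neurons), the following is a minimal sketch, not the authors' code: it assumes a CNN layer trained with ReLU activations and no biases, rate-codes the input as Poisson spike trains, and drives integrate-and-fire neurons with the learned weights. Layer sizes, threshold, and number of time steps are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_spikes(rates, t_steps):
    """Rate-code an input (values in [0, 1]) as Bernoulli/Poisson spike trains."""
    return (rng.random((t_steps,) + rates.shape) < rates).astype(np.float32)

def if_layer(spikes_in, weights, threshold=1.0):
    """Integrate-and-fire layer: accumulate weighted input spikes into a membrane
    potential each time step; emit a spike and reset when the threshold is crossed."""
    t_steps = spikes_in.shape[0]
    n_out = weights.shape[0]
    v = np.zeros(n_out, dtype=np.float32)                 # membrane potentials
    spikes_out = np.zeros((t_steps, n_out), dtype=np.float32)
    for t in range(t_steps):
        v += weights @ spikes_in[t]                       # integrate weighted input spikes
        fired = v >= threshold
        spikes_out[t, fired] = 1.0
        v[fired] = 0.0                                     # reset after firing
    return spikes_out

# Toy example: a 16-dimensional input and 4 output units whose weights would,
# in the paper's setting, come from conventional (ReLU, bias-free) CNN training.
x = rng.random(16).astype(np.float32)                      # stand-in for a normalized image patch
w = rng.normal(0, 0.3, size=(4, 16)).astype(np.float32)    # stand-in for learned weights

t_steps = 200
s_in = poisson_spikes(x, t_steps)
s_out = if_layer(s_in, w)

# Output firing rates approximate the ReLU responses of the original layer.
print("spike counts:     ", s_out.sum(axis=0))
print("scaled ReLU resp.:", np.maximum(w @ x, 0.0) * t_steps)
```

In this sketch the spike count of each integrate-and-fire neuron approximates the (scaled) ReLU response of the corresponding trained unit, which is the property that lets the converted network retain the CNN's recognition accuracy while running on spike-based hardware.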
Pages: 54-66
Page count: 13