Neuromorphic Accelerators: A Comparison Between Neuroscience and Machine-Learning Approaches

Cited by: 54
Authors
Du, Zidong [1,2]
Rubin, Daniel D. Ben-Dayan [3]
Chen, Yunji [1]
He, Liqiang [4]
Chen, Tianshi [1]
Zhang, Lei [1,2]
Wu, Chengyong [1]
Temam, Olivier
Affiliations
[1] Chinese Acad Sci, Inst Comp Technol, State Key Lab Comp Architecture, Beijing, Peoples R China
[2] Univ CAS, Beijing, Peoples R China
[3] Intel, Haifa, Israel
[4] Inner Mongolia Univ, Coll Comp Sci, Hohhot, Inner Mongolia, Peoples R China
Source
PROCEEDINGS OF THE 48TH ANNUAL IEEE/ACM INTERNATIONAL SYMPOSIUM ON MICROARCHITECTURE (MICRO-48) | 2015
Keywords
Neuromorphic; Accelerator; Comparison; NEURAL-NETWORK; NEURONS;
DOI
10.1145/2830772.2830789
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology]
Subject Classification Code
0812
Abstract
A vast array of devices, ranging from industrial robots to self-driving cars and smartphones, requires increasingly sophisticated processing of real-world input data (image, voice, radio, ...). Interestingly, hardware neural network accelerators are emerging again as attractive candidate architectures for such tasks. The neural network algorithms considered come from two largely separate domains: machine learning and neuroscience. These neural networks have very different characteristics, so it is unclear which approach should be favored for hardware implementation, yet few studies compare them from a hardware perspective. We implement both types of networks down to the layout, and we compare the relative merits of each approach in terms of energy, speed, area cost, accuracy, and functionality. Within the limits of our study (current SNN and machine-learning NN algorithms, our current best-effort hardware implementations, and the workloads used in this study), our analysis helps dispel the notion that hardware neural network accelerators inspired by neuroscience, such as SNN+STDP, are currently a competitive alternative to hardware neural network accelerators inspired by machine learning, such as MLP+BP: not only in terms of accuracy, but also, less expectedly, in terms of hardware cost for realistic implementations. However, we also show that SNN+STDP carries the potential for reduced hardware cost compared to machine-learning networks at very large scales, if accuracy issues can be controlled (or for applications where accuracy is less important). We also identify the key sources of inaccuracy of SNN+STDP, which are related less to the loss of information due to spike coding than to the nature of the STDP learning algorithm. Finally, we note that for the category of applications that require permanent online learning and only moderate accuracy, SNN+STDP hardware accelerators could be a very cost-efficient solution.
Pages: 494-507
Page count: 14
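The abstract contrasts SNN+STDP accelerators with MLP+BP ones and attributes the accuracy gap largely to the STDP learning rule itself. For readers unfamiliar with STDP, the sketch below shows a generic pair-based STDP weight update; the constants and this exact exponential form are textbook assumptions for illustration, not the paper's implemented rule:

```python
import math

# Illustrative constants (assumed, not from the paper):
A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # decay time constants in ms

def stdp_dw(dt_ms: float) -> float:
    """Weight change for one pre/post spike pair.

    dt_ms = t_post - t_pre. A positive value means the pre-synaptic
    spike preceded the post-synaptic one, so the synapse is
    potentiated; a negative value causes depression. The magnitude
    decays exponentially with the spike-time difference.
    """
    if dt_ms >= 0:
        return A_PLUS * math.exp(-dt_ms / TAU_PLUS)
    return -A_MINUS * math.exp(dt_ms / TAU_MINUS)
```

Unlike backpropagation, this update depends only on local spike timing, with no global error signal, which is one reason SNN+STDP supports cheap permanent online learning but struggles to match MLP+BP accuracy.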