Scalable Energy-Efficient, Low-Latency Implementations of Trained Spiking Deep Belief Networks on SpiNNaker

Cited: 0

Authors:
Stromatias, Evangelos [1 ]
Neil, Daniel [3 ,4 ]
Galluppi, Francesco [2 ]
Pfeiffer, Michael [3 ,4 ]
Liu, Shih-Chii [3 ,4 ]
Furber, Steve [1 ]
Affiliations:
[1] Univ Manchester, Sch Comp Sci, Adv Processor Technol Grp, Manchester M13 9PL, Lancs, England
[2] Univ Paris 06, CNRS UMR 7210, Equipe Vis & Calcul Nat, Vis Inst, UMR Inserm S968, CHNO Quinze Vingts, Paris, France
[3] Univ Zurich, Inst Neuroinformat, CH-8057 Zurich, Switzerland
[4] ETH, CH-8057 Zurich, Switzerland
Keywords: (none listed)
DOI: (not available)
Chinese Library Classification: TP18 (Artificial Intelligence Theory)
Discipline Codes: 081104; 0812; 0835; 1405
Abstract
Deep neural networks have become the state-of-the-art approach for classification in machine learning, and Deep Belief Networks (DBNs) are among their most successful representatives. DBNs consist of many neuron-like units that are connected only to neurons in neighboring layers. Larger DBNs have been shown to perform better, but scaling up poses problems for conventional CPUs, calling for efficient implementations on parallel computing architectures that, in particular, reduce communication overhead. In this context we introduce a realization of a spike-based variant of previously trained DBNs on the biologically inspired parallel SpiNNaker platform. The DBN on SpiNNaker runs in real time and achieves a classification performance of 95% on the MNIST handwritten digit dataset, only 0.06% below that of a pure software implementation. Importantly, the neurally inspired architecture yields additional benefits: during network run-time on this task, the platform consumes only 0.3 W, with classification latencies on the order of tens of milliseconds, making such networks suitable for mobile platforms. The results also show how the power dissipation of the SpiNNaker platform and the classification latency of a network scale with the number of neurons and layers in the network and with the overall spike activity rate.
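The abstract describes converting a conventionally trained, layer-wise connected DBN into a spiking network in which rate-coded input spikes are propagated through the trained weights. The sketch below is not the paper's SpiNNaker implementation; it is a minimal NumPy illustration of the general idea, assuming simple integrate-and-fire units without leak, Bernoulli-sampled (Poisson-like) input spikes, and a hypothetical function name `simulate_spiking_dbn`.

```python
import numpy as np

def simulate_spiking_dbn(weights, input_rates, T=100,
                         v_thresh=1.0, seed=0):
    """Toy feed-forward spiking network driven by trained weights.

    weights     : list of (n_in, n_out) arrays, one per layer
    input_rates : per-input firing probability per timestep (rate code)
    Returns the spike count of each output unit over T timesteps;
    a classification would be read out as the argmax of these counts.
    """
    rng = np.random.default_rng(seed)
    # one membrane-potential vector per non-input layer
    v = [np.zeros(w.shape[1]) for w in weights]
    out_counts = np.zeros(weights[-1].shape[1])

    for _ in range(T):
        # sample input spikes: each unit fires with its coded rate
        spikes = (rng.random(len(input_rates)) < input_rates).astype(float)
        for i, w in enumerate(weights):
            v[i] += spikes @ w               # integrate weighted input spikes
            spikes = (v[i] >= v_thresh).astype(float)
            v[i][spikes > 0] = 0.0           # reset units that fired
        out_counts += spikes                 # accumulate output-layer spikes
    return out_counts
```

With rate coding, higher output spike counts over the simulation window indicate stronger class evidence, which is why latency can be traded against accuracy by shortening the window.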
Pages: 8