Low-Power FPGA-Based Spiking Neural Networks for Real-Time Decoding of Intracortical Neural Activity

Cited by: 0
Authors
Martis, Luca [1 ]
Leone, Gianluca [1 ]
Raffo, Luigi [1 ]
Meloni, Paolo [1 ]
Affiliations
[1] Univ Cagliari, Dept Elect & Elect Engn, I-09123 Cagliari, Italy
Keywords
Decoding; Field programmable gate arrays; Accuracy; Spiking neural networks; Real-time systems; Signal processing algorithms; Microelectrodes; Hardware; Biological neural networks; Training; FPGA; low power; neural decoding; real time; spike detection; spiking neural network (SNN); BRAIN;
DOI
10.1109/JSEN.2024.3487021
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Classification Codes
0808; 0809
Abstract
Brain-machine interfaces (BMIs) are systems designed to decode neural signals and translate them into commands for external devices. Intracortical microelectrode arrays (MEAs) represent a significant advancement in this field, offering unprecedented spatial and temporal resolutions for monitoring brain activity. However, processing data from MEAs presents challenges due to high data rates and computing power requirements. To address these challenges, we propose a novel solution leveraging spiking neural networks (SNNs) that, due to their similarity to biological neural networks and their event-based nature, promise high compatibility with neural signals and low energy consumption. In this study, we introduce a real-time neural decoding system based on an SNN, deployed on a Lattice iCE40UP5k FPGA. This system is capable of reconstructing multiple target variables, related to the kinematics and kinetics of hand motion, from iEEG signals recorded by a 96-channel MEA. We evaluated the system using two different public datasets, achieving results similar to state-of-the-art neural decoders that use more complex deep learning models. This was obtained while maintaining an average power consumption of 13.9 mW and an average energy consumption per inference of 13.9 µJ.
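The event-driven decoding described in the abstract can be illustrated with a minimal leaky integrate-and-fire (LIF) layer driven by sparse spike events and followed by a linear readout of firing rates. This is a generic NumPy sketch, not the authors' fixed-point FPGA implementation; the decay factor, threshold, weights, and input statistics are all illustrative assumptions.

```python
import numpy as np

def lif_step(v, spikes_in, w, beta=0.9, v_th=1.0):
    """One timestep of a leaky integrate-and-fire layer.

    v         : membrane potentials, shape (n_out,)
    spikes_in : binary input spikes, shape (n_in,)
    w         : synaptic weights, shape (n_out, n_in)
    beta      : membrane decay factor (illustrative value)
    v_th      : firing threshold (illustrative value)
    """
    v = beta * v + w @ spikes_in              # leak, then integrate input events
    spikes_out = (v >= v_th).astype(float)    # fire where threshold is crossed
    v = v * (1.0 - spikes_out)                # reset membranes that fired
    return v, spikes_out

rng = np.random.default_rng(0)
n_in, n_out, t_steps = 96, 32, 100            # 96 input channels, as in the MEA
w = 0.1 * rng.standard_normal((n_out, n_in))
v = np.zeros(n_out)
rate = np.zeros(n_out)
for _ in range(t_steps):
    spikes_in = (rng.random(n_in) < 0.05).astype(float)  # sparse input events
    v, s = lif_step(v, spikes_in, w)
    rate += s

w_read = 0.01 * rng.standard_normal(n_out)
kinematic_estimate = w_read @ (rate / t_steps)  # linear readout of mean rates
```

Because computation happens only where spikes occur, hardware realizations of this loop can skip idle channels entirely, which is the property the abstract credits for the low energy per inference.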
Pages: 42448-42459
Page count: 12
Related Papers
50 items in total
  • [21] Pruning Binarized Neural Networks Enables Low-Latency, Low-Power FPGA-Based Handwritten Digit Classification
    Payra, Syamantak
    Loke, Gabriel
    Fink, Yoel
    Steinmeyer, Joseph D.
    2023 IEEE HIGH PERFORMANCE EXTREME COMPUTING CONFERENCE, HPEC, 2023,
  • [22] Conversion of Artificial Recurrent Neural Networks to Spiking Neural Networks for Low-power Neuromorphic Hardware
    Diehl, Peter U.
    Zarrella, Guido
    Cassidy, Andrew
    Pedroni, Bruno U.
    Neftci, Emre
    2016 IEEE INTERNATIONAL CONFERENCE ON REBOOTING COMPUTING (ICRC), 2016,
  • [23] STDP Design Trade-offs for FPGA-Based Spiking Neural Networks
    Medina Morillas, Rafael
    Ituero, Pablo
    2020 XXXV CONFERENCE ON DESIGN OF CIRCUITS AND INTEGRATED SYSTEMS (DCIS), 2020,
  • [24] A reconfigurable FPGA-based spiking neural network accelerator
    Yin, Mingqi
    Cui, Xiaole
    Wei, Feng
    Liu, Hanqing
    Jiang, Yuanyuan
    Cui, Xiaoxin
    MICROELECTRONICS JOURNAL, 2024, 152
  • [25] Flexible Deep-pipelined FPGA-based Accelerator for Spiking Neural Networks
    Lopez-Asuncion, Samuel
    Ituero Herrero, Pablo
    2023 38TH CONFERENCE ON DESIGN OF CIRCUITS AND INTEGRATED SYSTEMS, DCIS, 2023,
  • [26] A Reconfigurable Streaming Processor for Real-Time Low-Power Execution of Convolutional Neural Networks at the Edge
    Sanchez, Justin
    Soltani, Nasim
    Kulkarni, Pratik
    Chamarthi, Ramachandra Vikas
    Tabkhi, Hamed
    EDGE COMPUTING - EDGE 2018, 2018, 10973 : 49 - 64
  • [27] Real-time low-power binocular stereo vision based on FPGA
    Wu, Gang
    Yang, Jinglei
    Yang, Hao
    JOURNAL OF REAL-TIME IMAGE PROCESSING, 2022, 19 (01) : 29 - 39
  • [29] A generalized hardware architecture for real-time spiking neural networks
    Valencia, Daniel
    Alimohammad, Amir
    NEURAL COMPUTING & APPLICATIONS, 2023, 35 (24) : 17821 - 17835