A Noniterative Supervised On-Chip Training Circuitry for Reservoir Computing Systems

Cited by: 0
Authors
Galan-Prado, Fabio [1 ]
Rossello, Josep L. [1 ,2 ]
Affiliations
[1] Univ Balearic Isl, Elect Engn Grp, Ind Engn & Construct Dept, Palma De Mallorca 07122, Spain
[2] Balearic Isl Hlth Res Inst IdISBa, Palma De Mallorca 07010, Spain
Keywords
Training; Reservoirs; Hardware; System-on-chip; Linear matrix inequalities; Manganese; Artificial neural networks; Edge computing; max-plus algebra; neuromorphic hardware; reservoir computing (RC); NEURAL-NETWORKS; ANALOG;
DOI
10.1109/TNNLS.2022.3201828
CLC number
TP18 [Artificial intelligence theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Artificial neural networks (ANNs) constitute a rapidly growing field, mainly because of their wide range of everyday applications such as pattern recognition and time series forecasting. In particular, reservoir computing (RC) arises as a computational framework well suited to temporal/sequential data analysis. The direct on-silicon implementation of RC may help minimize power and maximize processing speed, which is especially relevant in edge intelligence applications where energy storage is severely restricted. Nevertheless, most RC hardware solutions in the literature perform the training process off-chip at the server level, thus increasing processing time and overall power dissipation. Some studies integrate both learning and inference on the same chip, although these works are normally oriented toward unsupervised learning (UL), with a lower expected accuracy than supervised learning (SL), or propose iterative solutions (with a consequently higher power consumption). Therefore, the integration of RC systems that include both inference and a fast noniterative SL method is still an incipient field. In this article, we propose a noniterative SL methodology for RC systems that can be implemented in hardware either sequentially or fully in parallel. The proposal offers a considerable advantage in energy efficiency (EE) and processing speed compared with traditional off-chip methods. To prove the validity of the model, a cyclic echo state neural network with on-chip learning capabilities for time series prediction has been implemented and tested on a field-programmable gate array (FPGA). In addition, a low-cost audio processing method is proposed that may be used to optimize the sound preprocessing steps.
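The abstract's key ingredients, a cyclic (ring-topology) echo state network and a noniterative supervised readout, can be illustrated in software. The sketch below is not the authors' on-chip circuit: the reservoir size, ring weight `r`, ridge parameter `lam`, and the sine-prediction task are all arbitrary choices for this example. The one-shot ridge-regression solve is what makes the training noniterative.

```python
import numpy as np

# Illustrative sketch (not the paper's hardware method): a cyclic echo
# state network whose linear readout is trained in a single closed-form
# step, i.e., noniterative supervised learning.
rng = np.random.default_rng(0)
N = 50                                   # reservoir neurons on a ring
r = 0.9                                  # single shared recurrent weight
w_in = rng.choice([-0.1, 0.1], size=N)   # fixed random-sign input weights

# Toy task: one-step-ahead prediction of a sine wave.
u = np.sin(0.2 * np.arange(400))
X = np.zeros((len(u), N))
x = np.zeros(N)
for t, ut in enumerate(u):
    # Cyclic reservoir update: each neuron receives its ring predecessor.
    x = np.tanh(r * np.roll(x, 1) + w_in * ut)
    X[t] = x

washout = 50                             # discard initial transient states
A, y = X[washout:-1], u[washout + 1:]    # map states -> next sample

# Noniterative readout training: closed-form ridge regression,
# w_out = (A^T A + lam I)^{-1} A^T y, computed in one shot.
lam = 1e-6
w_out = np.linalg.solve(A.T @ A + lam * np.eye(N), A.T @ y)

pred = A @ w_out
nrmse = np.sqrt(np.mean((pred - y) ** 2)) / np.std(y)
```

The closed-form solve replaces the iterative weight updates (e.g., gradient descent) that the abstract identifies as a source of extra power consumption in other on-chip learning proposals.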
Pages: 4097-4109
Page count: 13
Related papers
(50 records total)
  • [1] On-chip Parallel Photonic Reservoir Computing using Multiple Delay lines
    Hasnain, Syed Ali
    Mahapatra, Rabi
    2020 IEEE 32ND INTERNATIONAL SYMPOSIUM ON COMPUTER ARCHITECTURE AND HIGH PERFORMANCE COMPUTING (SBAC-PAD 2020), 2020, : 28 - 34
  • [2] Efficient parallel implementation of reservoir computing systems
    Alomar, M. L.
    Skibinsky-Gitlin, Erik S.
    Frasser, Christiam F.
    Canals, Vincent
    Isern, Eugeni
    Roca, Miquel
    Rossello, Josep L.
    NEURAL COMPUTING & APPLICATIONS, 2020, 32 (07) : 2299 - 2313
  • [3] On-Chip Communication Network for Efficient Training of Deep Convolutional Networks on Heterogeneous Manycore Systems
    Choi, Wonje
    Duraisamy, Karthi
    Kim, Ryan Gary
    Doppa, Janardhan Rao
    Pande, Partha Pratim
    Marculescu, Diana
    Marculescu, Radu
    IEEE TRANSACTIONS ON COMPUTERS, 2018, 67 (05) : 672 - 686
  • [4] DARKSIDE: A Heterogeneous RISC-V Compute Cluster for Extreme-Edge On-Chip DNN Inference and Training
    Garofalo, Angelo
    Tortorella, Yvan
    Perotti, Matteo
    Valente, Luca
    Nadalini, Alessandro
    Benini, Luca
    Rossi, Davide
    Conti, Francesco
    IEEE OPEN JOURNAL OF THE SOLID-STATE CIRCUITS SOCIETY, 2022, 2 : 231 - 243
  • [5] Sign backpropagation: An on-chip learning algorithm for analog RRAM neuromorphic computing systems
    Zhang, Qingtian
    Wu, Huaqiang
    Yao, Peng
    Zhang, Wenqiang
    Gao, Bin
    Deng, Ning
    Qian, He
    NEURAL NETWORKS, 2018, 108 : 217 - 223
  • [6] On-Chip Training of Crosstalk Predictors to Fit Uncertainties
    Sadeghi, Rezgar
    Akbari, Ehsan
    Saber, Mohamad Ali
    2022 IEEE EUROPEAN TEST SYMPOSIUM (ETS 2022), 2022,
  • [7] On-chip training of memristor crossbar based multi-layer neural networks
    Hasan, Raqibul
    Taha, Tarek M.
    Yakopcic, Chris
    MICROELECTRONICS JOURNAL, 2017, 66 : 31 - 40
  • [8] CIMAT: A Compute-In-Memory Architecture for On-chip Training Based on Transpose SRAM Arrays
    Jiang, Hongwu
    Peng, Xiaochen
    Huang, Shanshi
    Yu, Shimeng
    IEEE TRANSACTIONS ON COMPUTERS, 2020, 69 (07) : 944 - 954
  • [9] Variation Tolerant RRAM Based Synaptic Architecture for On-Chip Training
    Dongre, Ashvinikumar
    Trivedi, Gaurav
    IEEE TRANSACTIONS ON NANOTECHNOLOGY, 2023, 22 : 436 - 444
  • [10] SpaRCe: Improved Learning of Reservoir Computing Systems Through Sparse Representations
    Manneschi, Luca
    Lin, Andrew C.
    Vasilaki, Eleni
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (02) : 824 - 838