A Noniterative Supervised On-Chip Training Circuitry for Reservoir Computing Systems

Cited by: 0
Authors
Galan-Prado, Fabio [1 ]
Rossello, Josep L. [1 ,2 ]
Affiliations
[1] Univ Balearic Isl, Elect Engn Grp, Ind Engn & Construct Dept, Palma De Mallorca 07122, Spain
[2] Balearic Isl Hlth Res Inst IdISBa, Palma De Mallorca 07010, Spain
Keywords
Training; Reservoirs; Hardware; System-on-chip; Linear matrix inequalities; Manganese; Artificial neural networks; Edge computing; max-plus algebra; neuromorphic hardware; reservoir computing (RC); NEURAL-NETWORKS; ANALOG;
DOI
10.1109/TNNLS.2022.3201828
CLC Number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Artificial neural networks (ANNs) constitute a rapidly growing field, mainly because of their wide range of everyday applications such as pattern recognition and time series forecasting. In particular, reservoir computing (RC) arises as a computational framework well suited to temporal/sequential data analysis. Direct on-silicon implementation of RC may help to minimize power and maximize processing speed, which is especially relevant in edge intelligence applications where energy storage is considerably restricted. Nevertheless, most RC hardware solutions in the literature perform the training process off-chip at the server level, thus increasing processing time and overall power dissipation. Some studies integrate both learning and inference on the same chip, although these works are normally oriented toward unsupervised learning (UL), with a lower expected accuracy than supervised learning (SL), or propose iterative solutions (with a consequently higher power consumption). Therefore, the integration of RC systems including both inference and a fast noniterative SL method is still an incipient field. In this article, we propose a noniterative SL methodology for RC systems that can be implemented in hardware either sequentially or fully in parallel. The proposal presents a considerable advantage in terms of energy efficiency (EE) and processing speed compared to traditional off-chip methods. To prove the validity of the model, a cyclic echo state NN with on-chip learning capabilities for time series prediction has been implemented and tested on a field-programmable gate array (FPGA). In addition, a low-cost audio processing method is proposed that may be used to optimize the sound preprocessing steps.
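For context on what "noniterative" training means for a cyclic echo state network: the standard software-level formulation drives a fixed ring-topology reservoir with the input and solves the linear readout in a single closed-form ridge-regression step, rather than by gradient descent. The sketch below is a minimal software illustration of that generic formulation, not the paper's on-chip circuitry; all function names, sizes, and the regularization value are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (assumptions, not the paper's hardware method) of
# noniterative supervised training for a cyclic echo state network:
# a fixed ring of neurons driven by the input, with the linear readout
# obtained in one closed-form ridge-regression solve.

def run_cyclic_reservoir(u, n_neurons=100, w_cycle=0.9, w_in_scale=0.5, seed=0):
    """Drive a ring-topology reservoir with input sequence u of shape (T,)."""
    rng = np.random.default_rng(seed)
    w_in = w_in_scale * rng.choice([-1.0, 1.0], size=n_neurons)
    x = np.zeros(n_neurons)
    states = np.empty((len(u), n_neurons))
    for t, u_t in enumerate(u):
        # Cyclic topology: each neuron is fed by its ring predecessor.
        x = np.tanh(w_cycle * np.roll(x, 1) + w_in * u_t)
        states[t] = x
    return states

def train_readout(states, targets, ridge=1e-6):
    """Noniterative readout: W_out = Y X^T (X X^T + lambda*I)^-1."""
    X = states.T                      # (N, T) collected reservoir states
    Y = targets.reshape(1, -1)        # (1, T) teacher signal
    return Y @ X.T @ np.linalg.inv(X @ X.T + ridge * np.eye(X.shape[0]))

# Usage: one-step-ahead prediction of a sine wave.
u = np.sin(0.1 * np.arange(1000))
states = run_cyclic_reservoir(u[:-1])
w_out = train_readout(states, u[1:])  # single linear solve, no iterations
pred = (w_out @ states.T).ravel()
print("train MSE:", np.mean((pred - u[1:]) ** 2))
```

The single matrix solve in `train_readout` is what makes the training noniterative: hardware that accumulates the correlation matrices X X^T and Y X^T online, sequentially or in parallel, can obtain the readout weights without repeated passes over the data.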
Pages: 4097-4109
Page count: 13