Taming the Reservoir: Feedforward Training for Recurrent Neural Networks

Cited by: 0
Authors
Obst, Oliver [1 ]
Riedmiller, Martin [2 ]
Affiliations
[1] Adapt Syst, ICT Ctr, Commonwealth Sci & Ind Res Org, Sydney, NSW 2122, Australia
[2] Univ Freiburg, Dept Comp Sci, Machine Learning Lab, D-79110 Freiburg, Germany
Source
2012 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN) | 2012
Keywords
PLASTICITY; MODEL;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recurrent neural networks are successfully used for tasks like time series processing and system identification. Many of the approaches to train these networks, however, are often regarded as too slow, too complicated, or both. Reservoir computing methods such as echo state networks or liquid state machines are an alternative to the more traditional approaches. Echo state networks have the appeal that they are simple to train, and they have been shown to produce excellent results on a number of benchmarks and other tasks. One disadvantage of echo state networks, however, is the high variability in their performance due to the randomly connected hidden layer. Ideally, an efficient and more deterministic way to create connections in the hidden layer could be found, with performance better than that of randomly connected hidden layers but without iterating over the same training data excessively many times. We present an approach - tamed reservoirs - that makes use of efficient feedforward training methods and performs better than echo state networks on some time series prediction tasks. Moreover, our approach reduces some of the variability, since all recurrent connections in the network are trained.
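For context, the standard echo state network recipe that the abstract contrasts against (random, untrained recurrent reservoir; only a linear readout is fitted) can be sketched as follows. This is a minimal illustrative example, not the paper's method: the reservoir size, input scaling, spectral radius, ridge parameter, and the sine-wave prediction task are all assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not taken from the paper)
n_in, n_res = 1, 100

# Random input and reservoir weights; the reservoir matrix is rescaled so
# its spectral radius is below 1, a common sufficient condition for the
# echo state property. These connections stay fixed and are never trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(u):
    """Collect reservoir states for an input sequence u of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave
t = np.arange(600)
u = np.sin(0.1 * t)[:, None]
X = run_reservoir(u[:-1])   # states x_1 .. x_{T-1}
y = u[1:, 0]                # targets u_2 .. u_T

# Only the linear readout is trained, here via ridge regression;
# a washout discards the initial transient states.
washout, ridge = 100, 1e-6
Xw, yw = X[washout:], y[washout:]
W_out = np.linalg.solve(Xw.T @ Xw + ridge * np.eye(n_res), Xw.T @ yw)

pred = X @ W_out
mse = np.mean((pred[washout:] - y[washout:]) ** 2)
```

The high run-to-run variability criticised in the abstract comes from the random draw of `W` and `W_in`: a different seed yields a different reservoir and a different error, which is what the tamed-reservoir approach (training all recurrent connections with feedforward methods) aims to reduce.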
Pages: 7