Taming the Reservoir: Feedforward Training for Recurrent Neural Networks

Cited by: 0
Authors
Obst, Oliver [1 ]
Riedmiller, Martin [2 ]
Affiliations
[1] Commonwealth Scientific and Industrial Research Organisation (CSIRO), ICT Centre, Adaptive Systems, Sydney, NSW 2122, Australia
[2] University of Freiburg, Department of Computer Science, Machine Learning Lab, D-79110 Freiburg, Germany
Source
2012 International Joint Conference on Neural Networks (IJCNN), 2012
Keywords
Plasticity; Model
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Recurrent neural networks are used successfully for tasks such as time series processing and system identification. Many approaches to training these networks, however, are regarded as too slow, too complicated, or both. Reservoir computing methods such as echo state networks and liquid state machines offer an alternative to these more traditional approaches. Echo state networks are appealing because they are simple to train and have been shown to produce excellent results on a number of benchmarks and other tasks. One disadvantage of echo state networks, however, is the high variability in their performance caused by the randomly connected hidden layer. Ideally, connections in the hidden layer would be created in an efficient and more deterministic way, yielding better performance than random connectivity without iterating over the same training data many times. We present an approach, tamed reservoirs, that makes use of efficient feedforward training methods and outperforms echo state networks on some time series prediction tasks. Moreover, our approach reduces some of the variability, since all recurrent connections in the network are trained.
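For context, the following is a minimal sketch (Python/NumPy, not taken from the paper) of the standard echo state network setup the abstract refers to: the recurrent reservoir weights are drawn randomly and left untrained, and only the linear readout is fit, here by closed-form ridge regression. All sizes, scalings, and the toy task below are illustrative assumptions, not the authors' implementation.

# Minimal echo state network sketch (illustrative; not the paper's code).
# The reservoir is random and fixed; only the readout is trained, which
# is the source of the run-to-run variability the abstract mentions.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 100  # assumed sizes, chosen for illustration

# Random input and reservoir weights; rescale W to a spectral radius
# below 1 so the network has fading memory (echo state property).
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

def run_reservoir(u_seq):
    """Drive the reservoir with an input sequence and collect states."""
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u))
        states.append(x)
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
u = np.sin(np.arange(0, 30, 0.1))
X, y = run_reservoir(u[:-1]), u[1:]

# Train only the linear readout by ridge regression (closed form);
# the recurrent weights W are never touched.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print("train MSE:", np.mean((X @ W_out - y) ** 2))

Rerunning this with a different seed changes the reservoir and hence the result; the tamed-reservoir approach described in the abstract addresses exactly this by training the recurrent connections themselves with efficient feedforward methods.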
Pages: 7
Related Papers
50 items in total
  • [1] Kak, S. On training feedforward neural networks. Pramana - Journal of Physics, 1993, 40(1): 35-42.
  • [2] Mastorocostas, P.A. A constrained optimization algorithm for training locally recurrent globally feedforward neural networks. Proceedings of the International Joint Conference on Neural Networks (IJCNN), Vols 1-5, 2005: 717-722.
  • [4] Liu, Y.; Yang, P.-F.; Zhang, L.-J.; Wu, Z.-L.; Feng, Y. Survey on robustness verification of feedforward neural networks and recurrent neural networks. Ruan Jian Xue Bao/Journal of Software, 2023, 34(7): 1-33.
  • [5] Wu, L.Z.; Moody, J. A smoothing regularizer for feedforward and recurrent neural networks. Neural Computation, 1996, 8(3): 461-489.
  • [6] Tellez, P.; Tang, Y. Training feedforward neural networks using neural networks and genetic algorithms. International Conference on Computing, Communications and Control Technologies, Vol 1, Proceedings, 2004: 308-311.
  • [7] Olurotimi, O. Recurrent neural network training with feedforward complexity. IEEE Transactions on Neural Networks, 1994, 5(2): 185-197.
  • [8] Ben Nasr, M.; Chtourou, M. A hybrid training algorithm for feedforward neural networks. Neural Processing Letters, 2006, 24(2): 107-117.
  • [9] Gray, D.L.; Michel, A.N. A training algorithm for binary feedforward neural networks. IEEE Transactions on Neural Networks, 1992, 3(2): 176-194.
  • [10] Hartman, E. Training feedforward neural networks with gain constraints. Neural Computation, 2000, 12(4): 811-829.