Adding learning to cellular genetic algorithms for training recurrent neural networks

Cited by: 27
Authors
Ku, KWC [1 ]
Mak, MW [1 ]
Siu, WC [1 ]
Affiliations
[1] Hong Kong Polytech Univ, Dept Elect & Informat Engn, Hong Kong, Peoples R China
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 1999, Vol. 10, No. 2
Keywords
Baldwin effect; genetic algorithms; Lamarckian learning; real-time recurrent learning; recurrent neural networks;
DOI
10.1109/72.750546
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper proposes a hybrid optimization algorithm that combines local search (individual learning) with cellular genetic algorithms (GAs) for training recurrent neural networks (RNNs). Each weight of an RNN is encoded as a floating-point number, and the concatenation of these numbers forms a chromosome. Reproduction takes place locally on a square grid, with each grid point representing a chromosome. Two approaches to combining cellular GAs with learning, the Lamarckian and the Baldwinian mechanisms, are compared. Different hill-climbing algorithms are incorporated into the cellular GAs as learning methods, including real-time recurrent learning (RTRL), its simplified versions, and the delta rule. The simplified versions are obtained by successively freezing some of the weights in the RTRL algorithm. The delta rule, the simplest form of learning, is implemented by treating the RNNs as feedforward networks during learning. The hybrid algorithms are used to train RNNs to solve a long-term dependency problem. The results show that Baldwinian learning is inefficient in assisting the cellular GA. It is conjectured that the harder it is for genetic operations to produce the genotypic changes that match the phenotypic changes due to learning, the poorer the convergence of Baldwinian learning. Most of the combinations using the Lamarckian mechanism reduce the number of generations required to reach an optimum network; however, only a few reduce the actual time taken. Embedding the delta rule in the cellular GAs is found to be the fastest method. It is also concluded that learning should not be too extensive if the hybrid algorithm is to benefit from it.
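The scheme described in the abstract can be sketched in a few dozen lines. The sketch below is an illustration, not the paper's implementation: the fitness function is a toy quadratic standing in for RNN training error on the long-term dependency task, the local search is a crude random perturbation standing in for RTRL or the delta rule, and all grid sizes and rates are arbitrary. What it does show faithfully is the structure: real-valued chromosomes of weights, reproduction restricted to a local (von Neumann) neighbourhood on a square grid, and the Lamarckian/Baldwinian distinction, where Lamarckian learning writes acquired weights back into the genotype while Baldwinian learning only lets the learned result shape the fitness used for selection.

```python
import random

GRID, DIM, GENS = 5, 4, 30   # grid side, weights per chromosome, generations

def fitness(w):
    # Toy stand-in objective: negative squared distance to an all-ones
    # weight vector. (In the paper, fitness comes from the RNN's error
    # on a long-term dependency task; that is too heavy for a sketch.)
    return -sum((x - 1.0) ** 2 for x in w)

def hill_climb(w, iters=3, step=0.1):
    # Crude local search standing in for RTRL / the delta rule:
    # keep random perturbations that improve fitness.
    for _ in range(iters):
        cand = [x + random.uniform(-step, step) for x in w]
        if fitness(cand) > fitness(w):
            w = cand
    return w

def evaluate(geno, mode):
    # Lamarckian: the learned weights are written back into the genotype.
    # Baldwinian: learning shapes fitness only; the genotype is untouched.
    learned = hill_climb(list(geno))
    return (learned if mode == "lamarckian" else geno), fitness(learned)

def neighbours(i, j):
    # von Neumann neighbourhood on a torus -- reproduction is local,
    # which is what makes the GA "cellular".
    return [((i - 1) % GRID, j), ((i + 1) % GRID, j),
            (i, (j - 1) % GRID), (i, (j + 1) % GRID)]

def evolve(mode):
    random.seed(0)
    pop = [[[random.uniform(-1, 1) for _ in range(DIM)]
            for _ in range(GRID)] for _ in range(GRID)]
    score = [[0.0] * GRID for _ in range(GRID)]
    for i in range(GRID):
        for j in range(GRID):
            pop[i][j], score[i][j] = evaluate(pop[i][j], mode)
    for _ in range(GENS):
        new_pop = [row[:] for row in pop]
        new_score = [row[:] for row in score]
        for i in range(GRID):
            for j in range(GRID):
                # mate with the fittest grid neighbour; uniform crossover
                # plus Gaussian mutation on the floating-point genes
                a, b = max(neighbours(i, j),
                           key=lambda p: score[p[0]][p[1]])
                child = [random.choice(pair) + random.gauss(0, 0.05)
                         for pair in zip(pop[i][j], pop[a][b])]
                geno, sc = evaluate(child, mode)
                if sc > score[i][j]:          # elitist local replacement
                    new_pop[i][j], new_score[i][j] = geno, sc
        pop, score = new_pop, new_score
    return max(max(row) for row in score)

best_lam = evolve("lamarckian")
best_bal = evolve("baldwinian")
print(f"best fitness: lamarckian={best_lam:.4f}, baldwinian={best_bal:.4f}")
```

Note the single point of difference between the two mechanisms, isolated in `evaluate`: both select on the fitness reached after learning, but only the Lamarckian variant inherits the acquired weights, which is why its genotypic and phenotypic changes stay in step.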
Pages: 239-252
Page count: 14