Genetic Programming discovers efficient learning rules for the hidden and output layers of feedforward neural networks

Cited: 0
Authors
Radi, A [1]
Poli, R [1]
Affiliations
[1] Univ Birmingham, Sch Comp Sci, Birmingham B15 2TT, W Midlands, England
Source
GENETIC PROGRAMMING | 1999 / Vol. 1598
Keywords
DOI
None
CLC number
TP301 [Theory, Methods];
Subject classification
081202
Abstract
The learning method is critical for obtaining good generalisation in neural networks with limited training data. The Standard BackPropagation (SBP) training algorithm suffers from several problems, such as sensitivity to the initial conditions and very slow convergence. The aim of this work is to use Genetic Programming (GP) to discover new supervised learning algorithms which can overcome some of these problems. In previous research, a new learning algorithm for the output layer was discovered using GP; comparisons with SBP on different problems demonstrated better performance. This paper shows that GP can also discover better learning algorithms for the hidden layers, to be used in conjunction with the algorithm previously discovered. Comparing these with SBP on different problems, we show that they provide better performance. This study indicates that there exist many supervised learning algorithms better than SBP and that GP can be used to discover them.
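As a point of reference for the comparison described above, the following is a minimal sketch of the SBP baseline the paper measures against: the delta-rule weight update for a single sigmoid output unit. The GP-discovered rules replace this update formula; their exact evolved form is given in the paper, so only the standard SBP update is shown here, with illustrative weights and learning rate.

```python
import math

def sigmoid(z):
    """Logistic activation used by standard backpropagation."""
    return 1.0 / (1.0 + math.exp(-z))

def sbp_output_delta(target, output):
    # SBP error term for a sigmoid output unit: (t - o) * o * (1 - o)
    return (target - output) * output * (1.0 - output)

def train_step(w, x, target, lr=0.5):
    """One SBP gradient-descent step for a single output unit."""
    o = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    d = sbp_output_delta(target, o)
    # weight update: w_i += lr * delta * x_i
    return [wi + lr * d * xi for wi, xi in zip(w, x)], o

# Illustrative values (not from the paper): two inputs, target 0.9.
w = [0.1, -0.2]
x = [1.0, 0.5]
outputs = []
for _ in range(200):
    w, o = train_step(w, x, target=0.9)
    outputs.append(o)
print("first:", outputs[0], "last:", outputs[-1])
```

The `o * (1 - o)` factor in the delta is one source of the slow convergence the abstract mentions: it vanishes as the unit saturates, which is exactly the kind of behaviour an evolved rule can avoid.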
Pages: 120-134
Page count: 15