Adaptive basis function for artificial neural networks

Cited by: 9
Authors
Philip, NS [1]
Joseph, KB [1]
Affiliation
[1] Cochin Univ Sci & Technol, Dept Phys, Cochin 682022, Kerala, India
Keywords
back propagation algorithm; neural networks; optimization of basis function
DOI
10.1016/S0925-2312(01)00578-1
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
It is shown that modifying the sigmoidal basis function of a multi-layer feedforward artificial neural network with a control parameter improves the network's ability to learn. The modification is carried out by a gradient descent algorithm similar to back-propagation. In doing so, the method retains all the desirable properties of the sigmoidal function while enabling the ANN to approximate the decision function faster and with better accuracy. (C) 2002 Elsevier Science B.V. All rights reserved.
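The abstract does not specify the exact form of the control parameter, so the sketch below is only an illustration of the general idea: each hidden unit uses an adaptive sigmoid sigma_a(z) = 1 / (1 + exp(-a*z)) whose shape parameter a is trained by the same gradient-descent updates as the weights. The per-unit parameterization, the XOR toy problem, and all variable names are assumptions, not taken from the paper.

```python
# Hedged sketch of an "adaptive basis function" network: a one-hidden-layer
# feedforward net whose hidden units use sigma_a(z) = 1 / (1 + exp(-a*z)),
# with the shape parameter a learned by gradient descent alongside the weights.
# The parameterization is an assumption; it is not the authors' exact scheme.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, purely illustrative.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

n_in, n_hid, n_out = 2, 4, 1
W1 = rng.normal(0.0, 0.5, (n_in, n_hid))
b1 = np.zeros(n_hid)
W2 = rng.normal(0.0, 0.5, (n_hid, n_out))
b2 = np.zeros(n_out)
a = np.ones(n_hid)  # adaptive basis-function parameter, one per hidden unit

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(10000):
    # Forward pass: hidden units apply the adaptive sigmoid sigmoid(a * z1).
    z1 = X @ W1 + b1
    h = sigmoid(a * z1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass for a squared-error loss.
    d2 = (out - y) * out * (1 - out)           # error signal at the output layer
    dh = d2 @ W2.T                             # gradient w.r.t. hidden activations
    dz1 = dh * h * (1 - h) * a                 # chain rule through sigma_a w.r.t. z1
    da = (dh * h * (1 - h) * z1).sum(axis=0)   # gradient w.r.t. the shape parameter a

    # Plain gradient-descent updates; a is trained exactly like the weights.
    W2 -= lr * h.T @ d2
    b2 -= lr * d2.sum(axis=0)
    W1 -= lr * X.T @ dz1
    b1 -= lr * dz1.sum(axis=0)
    a -= lr * da

print("predictions:", out.ravel().round(3))
print("learned shape parameters:", a.round(3))
```

Because the derivative of sigma_a with respect to a is sigma_a * (1 - sigma_a) * z, the shape parameter can be updated with the same back-propagated error signal used for the incoming weights, which is the sense in which such a modification stays "similar to back-propagation".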
Pages: 21-34 (14 pages)
References
10 items
  • [1] Dorffner G., 1995, M011995 TU COTTB, p. 34
  • [2] Funahashi K. On the approximate realization of continuous mappings by neural networks. Neural Networks, 1989, 2(3): 183-192
  • [3] Hartman E.J., Keeler J.D., Kowalski J.M. Layered neural networks with Gaussian hidden units as universal approximations. Neural Computation, 1990, 2(2): 210-215
  • [4] Hornik K., Stinchcombe M., White H. Multilayer feedforward networks are universal approximators. Neural Networks, 1989, 2(5): 359-366
  • [5] Kraaijveld M.A., 1993, SMALL SAMPLE BEHAV M
  • [6] McCullagh P., 2019, Generalized Linear Models
  • [7] Mehrotra K., 1997, Elements of Artificial Neural Networks
  • [8] Park J., Sandberg I.W. Universal approximation using radial-basis-function networks. Neural Computation, 1991, 3(2): 246-257
  • [9] Philip DW, 1993, ADV METHODS NEURAL C
  • [10] Rumelhart D.E., 1986, Parallel Distributed Processing, Vol. 1