HOW TO MAKE SIGMA-PI NEURAL NETWORKS PERFORM PERFECTLY ON REGULAR TRAINING SETS

Cited by: 13
Authors
LENZE, B
Affiliation
[1] Fachhochschule Dortmund, Dortmund, Germany
Keywords
FEEDFORWARD SIGMA-PI NEURAL NETWORKS; HYPERBOLIC CARDINAL TRANSLATION-TYPE INTERPOLATION OPERATORS; PARALLEL SAMPLING; REAL-TIME UPDATE; ONE-SHOT LEARNING SCHEME; B-SPLINES; XOR-PROBLEM; MULTIGROUP DISCRIMINANT PROBLEMS;
DOI
10.1016/0893-6080(94)90009-4
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
In this paper, we show how to design three-layer feedforward neural networks with sigma-pi units in the hidden layer so that they perform perfectly on regular training sets. We obtain real-time design schemes based on massively parallel sampling and induced by so-called hyperbolic cardinal translation-type interpolation operators. The real-time nature of our strategy stems from the fact that, in neural network terms, our approach is nothing other than a very general and efficient one-shot learning scheme. Moreover, because of the special hyperbolic structure of our sigma-pi units, we avoid the dramatic increase in the number of parameters and weights that generally occurs in higher-order networks. The resulting networks are of manageable complexity and may be applied to multigroup discriminant problems, pattern recognition, and image processing. In particular, the XOR problem and a special multigroup discriminant problem are discussed at the end of the paper.
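To illustrate the one-shot idea summarized in the abstract, the following is a minimal sketch in Python, not the paper's construction: it interpolates the regular XOR training grid exactly by placing one product ("pi") hidden unit per grid point and setting the output weights directly to the sampled target values. The kernel `hat` and the helpers `pi_hidden_layer` and `one_shot_weights` are illustrative names chosen here; the paper's hidden units instead use hyperbolic cardinal translation-type interpolation operators, whose special structure keeps the number of weights from growing the way it does in general higher-order networks.

```python
import numpy as np

def hat(u):
    # One-dimensional cardinal hat (linear B-spline) kernel: equals 1 at
    # u = 0 and 0 at every other integer, linear in between.
    return np.maximum(0.0, 1.0 - np.abs(u))

def pi_hidden_layer(x, centres, spacing=1.0):
    # One product ("pi") unit per grid point: the product over input
    # dimensions of a 1-D kernel centred at that grid point.
    return np.prod(hat((x - centres) / spacing), axis=1)

def one_shot_weights(targets):
    # One-shot "learning": the output weights are simply the sampled targets.
    return np.asarray(targets, dtype=float)

# XOR on the regular training grid {0,1}^2
grid = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
targets = [0, 1, 1, 0]
w = one_shot_weights(targets)

for x, t in zip(grid, targets):
    y = w @ pi_hidden_layer(x, grid)  # sigma (sum) over the pi (product) units
    print(x, "target:", t, "network output:", y)  # exact on every grid point
```

Because each product unit vanishes at every grid point except its own centre, the weighted sum reproduces all training targets exactly; this is the sense in which such a network "performs perfectly on a regular training set", with the paper achieving it via hyperbolically structured sigma-pi units designed to keep the parameter count manageable.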
Pages: 1285-1293
Number of pages: 9