Hybrid interior point training of modular neural networks

Cited by: 4
Authors
Szymanski, PT [1]
Lemmon, M [1]
Bett, CJ [1]
Affiliations
[1] Univ Notre Dame, Dept Elect Engn, Notre Dame, IN 46556 USA
Funding
U.S. National Science Foundation
Keywords
modular neural networks; training; algorithms; interior-point methods; expectation-maximization methods
DOI
10.1016/S0893-6080(97)00119-6
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Modular neural networks use a single gating neuron to select the outputs of a collection of agent neurons. Expectation-maximization (EM) algorithms provide one way of training modular neural networks to approximate non-linear functionals. This paper introduces a hybrid interior-point (HIP) algorithm for training modular networks. The HIP algorithm combines an interior-point linear programming (LP) algorithm with a Newton-Raphson iteration in such a way that the computational efficiency of the interior-point LP methods is preserved. The algorithm is formally proven to converge asymptotically to locally optimal networks with a total computational cost that scales polynomially with problem size. Simulation experiments show that the HIP algorithm produces networks whose average approximation error is better than that of EM-trained networks. These results also demonstrate that the computational cost of the HIP algorithm scales at a slower rate than that of the EM procedure and that, for small networks, the total computational costs of the two methods are comparable. © 1998 Elsevier Science Ltd. All rights reserved.
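For intuition, the following is a minimal Python/NumPy sketch of the modular architecture the abstract describes: a gating unit weights the outputs of several agent ("expert") networks, and the module output is the gated combination. The softmax gate, the one-hidden-layer agents, the layer sizes, and all names (gate_W, expert_W1, expert_W2, forward) are illustrative assumptions, not the paper's exact formulation, and the random weights stand in for parameters the paper would obtain via HIP (or EM) training.

import numpy as np

rng = np.random.default_rng(0)

K, d_in, d_hid = 3, 2, 8          # number of agents, input dim, hidden width
gate_W = rng.normal(size=(K, d_in))            # linear gating parameters (hypothetical)
expert_W1 = rng.normal(size=(K, d_hid, d_in))  # agent hidden-layer weights
expert_W2 = rng.normal(size=(K, 1, d_hid))     # agent output weights

def forward(x):
    """Gated combination of the K agent outputs for an input x of shape (d_in,)."""
    # Softmax gating: a smooth stand-in for the gating neuron's selection.
    logits = gate_W @ x
    g = np.exp(logits - logits.max())
    g /= g.sum()
    # Each agent is a small one-hidden-layer network with tanh units.
    y = np.array([(expert_W2[k] @ np.tanh(expert_W1[k] @ x)).item()
                  for k in range(K)])
    return g @ y        # convex combination of agent outputs

print(forward(np.array([0.5, -1.0])))

In the paper's scheme, training such a module is what the HIP algorithm addresses: an interior-point LP step handles the gating/selection side while a Newton-Raphson iteration refines the agent parameters.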
Pages: 215 - 234
Number of pages: 20