Some new neural network architectures with improved learning schemes

Cited by: 12
Authors
Sinha M. [1 ]
Kumar K. [1 ]
Kalra P.K. [2 ]
Affiliations
[1] Department of Aerospace Engineering, IIT Kanpur, Kanpur
[2] Department of Electrical Engineering, IIT Kanpur, Kanpur
Keywords
Compensatory architecture; Complex backpropagation; Higher order neurons; Lambda-gamma learning algorithm; Self-scaling scaled conjugate gradient algorithm; Sigma-pi-sigma architecture
DOI: 10.1007/s005000000057
Abstract
Here, we present two new neuron model architectures and one modified form of the existing standard feedforward architecture (MSTD). Both new models use the self-scaling scaled conjugate gradient algorithm (SSCGA) and the lambda-gamma (L-G) learning algorithm, and combine the properties of basic and higher-order neurons (i.e., both summation and multiplication as aggregation functions). Of the two, the compensatory neural network architecture (CNNA) requires a relatively smaller number of inter-neuronal connections, cuts the computational budget by almost 50%, and speeds up convergence, while also giving better training and prediction accuracy. The second model, sigma-pi-sigma (SPS), ensures faster convergence and better training and prediction accuracy. The third model (MSTD) performs much better than the standard feedforward architecture (STD). The effect of normalizing the outputs for training is also studied here: at a low iteration count (~500), increasing the range of scaling yields virtually no improvement. Increasing the number of neurons beyond a point is likewise shown to have little effect in the case of higher-order neurons. Numerous simulation runs on the satellite orbit determination problem and the complex XOR problem establish the robustness of the proposed neuron model architectures. © Springer-Verlag 2000.
Pages: 214-223 (9 pages)
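
The abstract names the sigma-pi-sigma (SPS) idea only at a high level. As a hedged illustration of the general sigma-pi-sigma structure (a minimal sketch assuming a layer of weighted sums, a pi layer that multiplies groups of those sums, and a final summing output; the layer sizes, grouping, tanh activations, and all names below are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def sps_forward(x, W1, b1, groups, w2, b2):
    """Illustrative sigma-pi-sigma forward pass (a sketch, not the
    paper's exact model): a sigma layer of weighted sums, a pi layer
    multiplying groups of those sums, and a final sigma output."""
    s = np.tanh(W1 @ x + b1)                        # first sigma layer: weighted sums
    p = np.array([np.prod(s[g]) for g in groups])   # pi layer: product over each index group
    return np.tanh(w2 @ p + b2)                     # final sigma layer: weighted sum of products

# Toy usage on the XOR-style input mentioned in the abstract (hypothetical sizes).
rng = np.random.default_rng(0)
x = np.array([1.0, 0.0])
W1 = rng.normal(size=(4, 2)); b1 = rng.normal(size=4)
groups = [[0, 1], [2, 3]]        # which sigma outputs each pi unit multiplies
w2 = rng.normal(size=2); b2 = 0.0
print(sps_forward(x, W1, b1, groups, w2, b2))
```

On this reading, the pi layer is what supplies the higher-order (multiplicative) interactions that the abstract attributes to the SPS and compensatory models, alongside the ordinary summing aggregation of basic neurons.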