A very fast learning method for neural networks based on sensitivity analysis

Cited by: 0
Authors
Castillo, Enrique [1]
Guijarro-Berdinas, Bertha
Fontenla-Romero, Oscar
Alonso-Betanzos, Amparo
Affiliations
[1] Univ Cantabria, Dept Appl Math & Computat Sci, E-39005 Santander, Spain
[2] Univ Castilla La Mancha, Santander 39005, Spain
[3] Univ A Coruna, Fac Informat, Dept Comp Sci, La Coruna 15071, Spain
Keywords
supervised learning; neural networks; linear optimization; least-squares; initialization method; sensitivity analysis
DOI
Not available
CLC classification number
TP [Automation & Computer Technology]
Subject classification code
0812
Abstract
This paper introduces a learning method for two-layer feedforward neural networks based on sensitivity analysis, which uses a linear training algorithm for each of the two layers. First, random values are assigned to the outputs of the first layer; these initial values are then updated based on sensitivity formulas, which use the weights of each layer; the process is repeated until convergence. Since the weights are learnt by solving a linear system of equations, there is an important saving in computational time. The method also gives the local sensitivities of the least-squares errors with respect to the input and output data at no extra computational cost, because the necessary information becomes available without additional calculations. This method, called the Sensitivity-Based Linear Learning Method, can also be used to provide an initial set of weights, which significantly improves the behavior of other learning algorithms. The theoretical basis for the method is given, and its performance is illustrated by applying it to several examples, in which it is compared with other learning algorithms on well-known data sets. The results show a learning speed that is generally faster than that of existing methods. In addition, the method can be used as an initialization tool for other well-known methods, with significant improvements.
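To make the alternating scheme in the abstract concrete, here is a minimal sketch of the idea: each layer is fit by solving a linear least-squares problem, and the intermediate outputs z are then adjusted using the error of both layers. This is an illustration under stated assumptions, not the authors' exact SBLLM formulation: it assumes a tanh hidden layer, a linear output layer, and a plain gradient step on z in place of the paper's sensitivity formulas; all names (train_two_layer, n_hidden, step) are hypothetical.

```python
# Sketch of the alternating linear-training idea described in the abstract.
# Not the authors' exact SBLLM; assumptions: tanh hidden units, linear
# output layer, gradient step on z instead of the paper's sensitivity formulas.
import numpy as np

def train_two_layer(X, Y, n_hidden, n_iter=100, step=0.01, seed=None):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    Xb = np.hstack([X, np.ones((n, 1))])           # inputs with bias column
    # Step 1: random initial values for the hidden-layer outputs, inside
    # tanh's range so that arctanh below is well defined.
    z = rng.uniform(-0.9, 0.9, size=(n, n_hidden))
    for _ in range(n_iter):
        # Step 2: each layer reduces to a linear least-squares problem.
        # Layer 1: fit W1 so that tanh(Xb @ W1) ~ z, i.e. Xb @ W1 ~ arctanh(z).
        W1, *_ = np.linalg.lstsq(Xb, np.arctanh(z), rcond=None)
        # Layer 2: fit W2 so that zb @ W2 ~ Y (linear output layer).
        zb = np.hstack([z, np.ones((n, 1))])
        W2, *_ = np.linalg.lstsq(zb, Y, rcond=None)
        # Step 3: update z by descending the combined squared error of both
        # layers with respect to z (a crude stand-in for the sensitivity
        # formulas the paper derives).
        e1 = np.arctanh(z) - Xb @ W1               # layer-1 residual
        e2 = zb @ W2 - Y                           # layer-2 residual
        grad = e1 / (1.0 - z**2) + e2 @ W2[:-1].T  # d(error)/dz of both terms
        z = np.clip(z - step * grad, -0.99, 0.99)  # stay in tanh's open range
    return W1, W2
```

Under these assumed conventions, predictions would be computed as np.tanh(Xb @ W1), extended with a bias column, times W2. The point the sketch illustrates is the source of the claimed speed-up: each weight update is a single linear solve (one lstsq call per layer) rather than an iterative gradient loop over the weights.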
Pages: 1159 - 1182
Page count: 24
Related papers (50 in total)
  • [1] Fast learning method for RAAM based on sensitivity analysis
    Barcz, A.
    PHOTONICS APPLICATIONS IN ASTRONOMY, COMMUNICATIONS, INDUSTRY, AND HIGH-ENERGY PHYSICS EXPERIMENTS 2014, 2014, 9290
  • [2] Structure Optimization of BP Neural Networks Based on Sensitivity Analysis
    Zhao, Jian
    Shen, Yunzhong
    2011 INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS AND NEURAL COMPUTING (FSNC 2011), VOL V, 2011, : 531 - 535
  • [3] Structure Optimization of BP Neural Networks Based on Sensitivity Analysis
    Zhao, Jian
    Shen, Yunzhong
    2011 AASRI CONFERENCE ON ARTIFICIAL INTELLIGENCE AND INDUSTRY APPLICATION (AASRI-AIIA 2011), VOL 1, 2011, : 77 - 81
  • [4] Sensitivity analysis for selective learning by feedforward neural networks
    Engelbrecht, AP
    FUNDAMENTA INFORMATICAE, 2001, 46 (03) : 219 - 252
  • [5] Sensitivity analysis for selective learning by feedforward neural networks
    Engelbrecht, AP
    FUNDAMENTA INFORMATICAE, 2001, 45 (04) : 295 - 328
  • [6] Variance decomposition-based sensitivity analysis via neural networks
    Marseguerra, M
    Masini, R
    Zio, E
    Cojazzi, G
    RELIABILITY ENGINEERING & SYSTEM SAFETY, 2003, 79 (02) : 229 - 238
  • [7] Fast Learning Architecture for Neural Networks
    Zhang Ming Jun
    Garcia, Samuel
    Terre, Michel
    2022 30TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO 2022), 2022, : 1611 - 1615
  • [8] Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates
    Smith, Leslie N.
    Topin, Nicholay
    ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS, 2019, 11006
  • [9] OPTIMAL FILTERING ALGORITHMS FOR FAST LEARNING IN FEEDFORWARD NEURAL NETWORKS
    SHAH, S
    PALMIERI, F
    DATUM, M
    NEURAL NETWORKS, 1992, 5 (05) : 779 - 787
  • [10] Learning Automata Based Incremental Learning Method for Deep Neural Networks
    Guo, Haonan
    Wang, Shilin
    Fan, Jianxun
    Li, Shenghong
IEEE ACCESS, 2019, 7 : 41164 - 41171