Solving the linear interval tolerance problem for weight initialization of neural networks

Cited by: 29
Authors
Adam, S. P. [1 ,2 ]
Karras, D. A. [3 ]
Magoulas, G. D. [4 ]
Vrahatis, M. N. [1 ]
Affiliations
[1] Univ Patras, Dept Math, Computat Intelligence Lab, GR-26110 Patras, Greece
[2] Technol Educ Inst Epirus, Dept Comp Engn, Arta 47100, Greece
[3] Technol Educ Inst Sterea Hellas, Dept Automat, Psahna 34400, Evia, Greece
[4] Univ London, Birkbeck Coll, Dept Comp Sci & Informat Syst, London WC1E 7HX, England
Keywords
Neural networks; Weight initialization; Interval analysis; Linear interval tolerance problem; FEEDFORWARD NETWORKS; STATISTICAL TESTS; TRAINING SPEED; HIGH-DIMENSION; BACKPROPAGATION; ALGORITHM; INTELLIGENCE;
DOI
10.1016/j.neunet.2014.02.006
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Determining good initial conditions for the algorithm used to train a neural network can be considered a parameter estimation problem dealing with uncertainty about the initial weights. Interval analysis approaches model this uncertainty using intervals and by formulating tolerance problems. Solving a tolerance problem means defining lower and upper bounds of the intervals so that the system's functionality is guaranteed within predefined limits. The aim of this paper is to show how the problem of determining the initial weight intervals of a neural network can be cast as a linear interval tolerance problem. The proposed linear interval tolerance approach copes with uncertainty about the initial weights without any prior knowledge of, or specific assumptions about, the input data, as required by approaches such as fuzzy sets or rough sets. The proposed method is tested on a number of well-known benchmarks for neural networks trained with the back-propagation family of algorithms. Its efficiency is evaluated with regard to standard performance measures, and the results obtained are compared against those of a number of well-known and established initialization methods. These results provide credible evidence that the proposed method outperforms classical weight initialization methods. (C) 2014 Elsevier Ltd. All rights reserved.
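To make the abstract's idea concrete, the sketch below is a hypothetical, simplified interval-based initializer, not the paper's actual linear interval tolerance solver. It chooses a symmetric weight interval [-b, b] so that, for any input vector inside a known box, every neuron's pre-activation is guaranteed to stay inside a target interval [-s_max, s_max] (roughly the near-linear region of a sigmoid). The function name, the bound s_max, and the worst-case inequality sum_i |w_i| * r <= s_max are illustrative assumptions.

```python
import numpy as np

def interval_weight_init(n_inputs, n_neurons, x_lo, x_hi, s_max=4.0, rng=None):
    """Hypothetical sketch of interval-constrained initialization.

    Guarantees that for every input x with x_lo <= x_i <= x_hi, each
    neuron's pre-activation w . x lies in [-s_max, s_max]. A sufficient
    worst-case condition is sum_i |w_i| * r <= s_max, where
    r = max(|x_lo|, |x_hi|), so each weight is drawn uniformly from
    [-b, b] with b = s_max / (n_inputs * r).
    """
    rng = np.random.default_rng(rng)
    r = max(abs(x_lo), abs(x_hi))
    b = s_max / (n_inputs * r)
    return rng.uniform(-b, b, size=(n_neurons, n_inputs))

# Worst-case check: inputs confined to [-1, 1]^8, 4 hidden neurons.
W = interval_weight_init(8, 4, x_lo=-1.0, x_hi=1.0)
# Maximum possible |w . x| over the input box is sum_i |w_i| * 1.0.
s = np.abs(W).sum(axis=1)
assert np.all(s <= 4.0)  # every pre-activation stays in [-4, 4]
```

The actual method in the paper solves a linear interval tolerance problem over the full network rather than applying this per-neuron worst-case bound; the sketch only conveys why interval constraints on the weights translate into guaranteed pre-activation ranges.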
Pages: 17-37
Page count: 21
Related papers
50 records in total
  • [1] A review on weight initialization strategies for neural networks
    Narkhede, Meenal V.
    Bartakke, Prashant P.
    Sutaone, Mukul S.
    ARTIFICIAL INTELLIGENCE REVIEW, 2022, 55 (01) : 291 - 322
  • [2] A weight initialization based on the linear product structure for neural networks
    Chen, Qipin
    Hao, Wenrui
    He, Juncai
    APPLIED MATHEMATICS AND COMPUTATION, 2022, 415
  • [3] Mutual information based weight initialization method for sigmoidal feedforward neural networks
    Qiao, Junfei
    Li, Sanyi
    Li, Wenjing
    NEUROCOMPUTING, 2016, 207 : 676 - 683
  • [4] Interval Based Weight Initialization Method for Sigmoidal Feedforward Artificial Neural Networks
    Sodhi, Sartaj Singh
    Chandra, Pravin
    2ND AASRI CONFERENCE ON COMPUTATIONAL INTELLIGENCE AND BIOINFORMATICS, 2014, 6 : 19 - 25
  • [5] An overview on weight initialization methods for feedforward neural networks
    de Sousa, Celso A. R.
    2016 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2016, : 52 - 59
  • [7] Unsupervised neural networks for solving Troesch's problem
    Raja, Muhammad Asif Zahoor
    CHINESE PHYSICS B, 2014, 23 (01)
  • [8] Data-driven weight initialization strategy for convolutional neural networks
    Narkhede, Meenal
    Mahajan, Shrinivas
    Bartakke, Prashant
    EVOLUTIONARY INTELLIGENCE, 2025, 18 (01)
  • [9] Weight initialization of feedforward neural networks by means of partial least squares
    Liu, Yan
    Zhou, Chang-Feng
    Chen, Ying-Wu
    PROCEEDINGS OF 2006 INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND CYBERNETICS, VOLS 1-7, 2006, : 3119 - +
  • [10] A New Weight Initialization Method for Sigmoidal Feedforward Artificial Neural Networks
    Sodhi, Sartaj Singh
    Chandra, Pravin
    Tanwar, Sharad
    PROCEEDINGS OF THE 2014 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2014, : 291 - 298