Comparison of stochastic global optimization methods to estimate neural network weights

Cited by: 42
Authors
Hamm, Lonnie
Brorsen, B. Wade
Hagan, Martin T.
Affiliations
[1] Oklahoma State Univ, Dept Agr Econ, Stillwater, OK 74078 USA
[2] Straumur Burdaras Investment Bank, London, England
[3] Oklahoma State Univ, Sch Elect & Comp Engn, Stillwater, OK 74078 USA
Keywords
evolutionary algorithms; function approximation; neural networks; simulated annealing; stochastic global optimization;
DOI
10.1007/s11063-007-9048-7
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Training a neural network is a difficult optimization problem because of its numerous local minima. Many global search algorithms have been used to train neural networks. However, local search algorithms use computational resources more efficiently, so numerous random restarts of a local algorithm may be more effective than a global algorithm. This study uses Monte Carlo simulations to determine the efficiency of a local search algorithm relative to nine stochastic global algorithms when training a neural network on function approximation problems. The computational requirements of the global algorithms are several times higher than those of the local algorithm, and there is little gain from using the global algorithms to train neural networks. Since the global algorithms only marginally outperform the local algorithm in reaching a lower local minimum while requiring more computational resources, the results indicate that, for the specific algorithms and function approximation problems studied, there is little evidence that a global algorithm should be preferred over a more traditional local optimization routine for training neural networks. Further, neural networks should not be estimated from a single set of starting values, whether a global or local optimization method is used.
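
To illustrate the multi-start strategy the abstract weighs against global methods, the sketch below trains a small single-hidden-layer network on a toy function approximation task from several random starting weight sets and keeps the best local minimum found. This is a minimal NumPy sketch written for this record, not the authors' code; the target function, network size, optimizer, and all hyperparameters are illustrative assumptions.

# Minimal sketch (not the authors' code) of the multi-start idea described in
# the abstract: train the same small network from several random initial
# weight sets with a plain local optimizer and keep the best local minimum.
# The target function, network size, and hyperparameters are illustrative
# assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Toy function approximation task: learn y = sin(x) on [-3, 3].
x = np.linspace(-3.0, 3.0, 200).reshape(-1, 1)
y = np.sin(x)

def init_weights(n_hidden=10):
    """One random starting point: a single-hidden-layer tanh network."""
    return {
        "W1": rng.normal(0.0, 0.5, (1, n_hidden)),
        "b1": np.zeros(n_hidden),
        "W2": rng.normal(0.0, 0.5, (n_hidden, 1)),
        "b2": np.zeros(1),
    }

def forward(p, x):
    h = np.tanh(x @ p["W1"] + p["b1"])
    return h, h @ p["W2"] + p["b2"]

def train_local(p, lr=0.05, epochs=2000):
    """Plain batch gradient descent: a local search from one starting point."""
    for _ in range(epochs):
        h, out = forward(p, x)
        err = out - y
        # Backpropagate mean-squared-error gradients.
        gW2 = h.T @ err / len(x)
        gb2 = err.mean(axis=0)
        dh = (err @ p["W2"].T) * (1.0 - h**2)
        gW1 = x.T @ dh / len(x)
        gb1 = dh.mean(axis=0)
        p["W1"] -= lr * gW1; p["b1"] -= lr * gb1
        p["W2"] -= lr * gW2; p["b2"] -= lr * gb2
    return np.mean((forward(p, x)[1] - y) ** 2)

# Multi-start: repeat the local search from several random initializations
# and keep the lowest local minimum, instead of one run of a global method.
losses = [train_local(init_weights()) for _ in range(10)]
print(f"best of 10 restarts: {min(losses):.5f}, worst: {max(losses):.5f}")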
Pages: 145-158
Page count: 14