Let a biogeography-based optimizer train your Multi-Layer Perceptron

Cited by: 246
Authors
Mirjalili, Seyedali [1 ]
Mirjalili, Seyed Mohammad [2 ]
Lewis, Andrew [1 ]
Affiliations
[1] Griffith Univ, Sch Informat & Commun Technol, Brisbane, Qld 4111, Australia
[2] ZPS Co, Tehran, Iran
Keywords
FNN; neural network; learning neural network; Biogeography-Based Optimization; BBO; evolutionary algorithm; particle swarm optimization; artificial neural networks; extreme learning machine; krill herd algorithm; convergence rates; selection
DOI
10.1016/j.ins.2014.01.038
Chinese Library Classification
TP [automation technology, computer technology]
Subject Classification Code
0812
Abstract
The Multi-Layer Perceptron (MLP), one of the most widely used Neural Networks (NNs), has been applied to many practical problems. An MLP must be trained for each specific application, and training often suffers from entrapment in local minima, slow convergence, and sensitivity to initialization. This paper proposes using the recently developed Biogeography-Based Optimization (BBO) algorithm to train MLPs and reduce these problems. To investigate the efficiency of BBO in training MLPs, five classification datasets and six function approximation datasets are employed. The results are compared to five well-known heuristic algorithms, Back Propagation (BP), and the Extreme Learning Machine (ELM) in terms of entrapment in local minima, result accuracy, and convergence rate. The results show that training MLPs with BBO is significantly better than the current heuristic learning algorithms and BP. Moreover, BBO provides very competitive results in comparison with ELM. (C) 2014 Elsevier Inc. All rights reserved.
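The training scheme the abstract describes can be sketched in a few lines. This is a minimal illustration, not the authors' exact implementation: the XOR dataset, network size, population size, linear migration rates, and mutation probability are all assumptions. Each candidate MLP weight vector plays the role of a BBO "habitat", and the training MSE serves as its (inverse) habitat suitability, i.e. the fitness to be minimized.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset (XOR) -- an assumption for illustration; the paper uses
# five classification and six function-approximation datasets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

H = 4                        # hidden units (assumed)
DIM = 2 * H + H + H + 1      # W1 (2xH) + b1 (H) + W2 (H) + b2 (1)

def mse(w):
    """Fitness of one habitat: training MSE of the MLP encoded by w."""
    W1 = w[:2 * H].reshape(2, H)
    b1 = w[2 * H:3 * H]
    W2 = w[3 * H:4 * H]
    b2 = w[4 * H]
    hidden = np.tanh(X @ W1 + b1)
    return np.mean((hidden @ W2 + b2 - y) ** 2)

POP, ITERS, P_MUT = 30, 150, 0.05
pop = rng.uniform(-1, 1, size=(POP, DIM))

def bbo_step(pop):
    fit = np.array([mse(p) for p in pop])
    order = np.argsort(fit)
    pop, fit = pop[order], fit[order]
    mu = np.linspace(1.0, 0.0, POP)   # emigration rate: best habitat shares most
    lam = 1.0 - mu                    # immigration rate: worst habitat accepts most
    probs = mu / mu.sum()
    new = pop.copy()
    for i in range(1, POP):           # index 0 is the elite, kept unchanged
        for d in range(DIM):
            if rng.random() < lam[i]:                  # migration of one weight
                new[i, d] = pop[rng.choice(POP, p=probs), d]
            if rng.random() < P_MUT:                   # mutation
                new[i, d] = rng.uniform(-1, 1)
    return new, fit[0]

best0 = min(mse(p) for p in pop)
for _ in range(ITERS):
    pop, best = bbo_step(pop)
print(f"initial best MSE {best0:.4f} -> final best MSE {best:.4f}")
```

Because the elite habitat is carried over unchanged, the best MSE is monotonically non-increasing across generations; migration of individual weight components between candidate networks is what gives BBO its resistance to the local-minima entrapment the abstract mentions.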
Pages: 188-209 (22 pages)
References
(80 total)
[1] Abedifar V., 2013, 2013 21 IR C EL ENG, P1
[2] Adeli H., Hung S.L. An adaptive conjugate-gradient learning algorithm for efficient training of neural networks [J]. Applied Mathematics and Computation, 1994, 62(1): 81-102
[3] [Anonymous], 1999, FEEDFORWARD NEURAL N
[4] [Anonymous], 1974, Ph.D. Thesis
[5] [Anonymous], 1993, INT C ART NEUR NETW
[6] [Anonymous], 2012, PHOT GLOB C PGC
[7] Auer P., Burgsteiner H., Maass W. A learning rule for very simple universal approximators consisting of a single layer of perceptrons [J]. Neural Networks, 2008, 21(5): 786-795
[8] Barakat M., Lefebvre D., Khalil M., Druaux F., Mustapha O. Parameter selection algorithm with self adaptive growing neural network classifier for diagnosis issues [J]. International Journal of Machine Learning and Cybernetics, 2013, 4(3): 217-233
[9] Bell A.J., 1999, COMPUT NEUR MIT, P145
[10] Blake C.L., 1998, UCI Repository of Machine Learning Databases