Improved monarch butterfly optimization for unconstrained global search and neural network training

Cited by: 85
Authors
Faris, Hossam [1 ]
Aljarah, Ibrahim [1 ]
Mirjalili, Seyedali [2 ]
Affiliations
[1] Univ Jordan, King Abdullah II Sch Informat Technol, Business Informat Technol Dept, Amman, Jordan
[2] Griffith Univ, Sch Informat & Commun Technol, Nathan, Qld 4111, Australia
Keywords
MBO; Global optimization; Multilayer perceptron; Neural network; Optimization; Genetic algorithm; Backpropagation
DOI
10.1007/s10489-017-0967-3
Chinese Library Classification
TP18 [Artificial intelligence theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
This work is a seminal attempt to address the drawbacks of the recently proposed monarch butterfly optimization (MBO) algorithm. This algorithm suffers from premature convergence, which makes it less suitable for solving real-world problems. The position updating of MBO is modified to involve previous solutions in addition to the best solution obtained thus far. To evaluate the efficiency of the Improved MBO (IMBO), a set of 23 well-known test functions is employed. The statistical results show that IMBO benefits from strong local-optima avoidance and fast convergence, which help it outperform the basic MBO and another recent variant of the algorithm, MBO with a greedy strategy and self-adaptive crossover operator (GCMBO). For verification, the results of the proposed algorithm are also compared with nine other approaches from the literature. The comparative analysis shows that IMBO provides very competitive results and tends to outperform current algorithms. To demonstrate the applicability of IMBO to challenging practical problems, it is also employed to train neural networks. The IMBO-based trainer is tested on 15 popular classification datasets from the University of California, Irvine (UCI) Machine Learning Repository, and the results are compared with a variety of techniques from the literature, including the original MBO and GCMBO. IMBO is observed to improve the learning of neural networks significantly, demonstrating the merits of the algorithm for solving challenging problems.
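The abstract's description of the modified position update (blending previous solutions with the best solution found so far) can be illustrated with a short sketch. The update rule, the blend weights w_prev and w_best, and the random factors below are assumptions for illustration only; they are not the published IMBO equations.

```python
import numpy as np

def imbo_style_update(pop, prev_pop, best, rng, w_prev=0.5, w_best=0.5):
    """Illustrative position update: each candidate moves toward a blend of
    its previous position and the best-so-far solution.
    pop, prev_pop: (n, d) arrays of current and previous positions.
    best: (d,) best solution found so far.
    NOTE: the blend weights and uniform random factors are hypothetical.
    """
    r = rng.random(pop.shape)  # fresh per-dimension randomness each call
    step = w_best * (best - pop) + w_prev * (prev_pop - pop)
    return pop + r * step

# Toy usage on the sphere function f(x) = sum(x_i^2):
rng = np.random.default_rng(0)
pop = rng.uniform(-5.0, 5.0, size=(10, 3))
prev_pop = pop.copy()
best = pop[np.argmin((pop ** 2).sum(axis=1))].copy()
pop = imbo_style_update(pop, prev_pop, best, rng)
```

Likewise, training a neural network with a metaheuristic such as IMBO is typically cast as a continuous optimization problem: the network's weights and biases are flattened into the search vector and the training error serves as the fitness. The one-hidden-layer architecture, sigmoid activations, and MSE loss below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def mlp_fitness(vec, X, y, n_hidden=5):
    """Decode a flat parameter vector into a one-hidden-layer MLP and
    return its mean squared error on (X, y) as the fitness to minimize.
    Expected length of vec: n_in*n_hidden + n_hidden + n_hidden + 1.
    """
    n_in = X.shape[1]
    i = 0
    W1 = vec[i:i + n_in * n_hidden].reshape(n_in, n_hidden)
    i += n_in * n_hidden
    b1 = vec[i:i + n_hidden]; i += n_hidden
    W2 = vec[i:i + n_hidden]; i += n_hidden
    b2 = vec[i]
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))    # sigmoid hidden layer
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output (binary case)
    return float(np.mean((out - y) ** 2))       # fitness = training MSE
```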
Pages: 445-464
Page count: 20