Improved GWO for large-scale function optimization and MLP optimization in cancer identification

Cited by: 34
Authors
Zhang, Xinming [1]
Wang, Xia [1]
Chen, Haiyan [2]
Wang, Doudou [1]
Fu, Zihao [1]
Affiliations
[1] Henan Normal Univ, Coll Comp & Informat Engn, Xinxiang 453007, Henan, Peoples R China
[2] Hubei Canc Hosp, Dept Gynaecol Tumour, Wuhan 430079, Peoples R China
Keywords
Intelligent optimization algorithm; Grey wolf optimizer (GWO); Opposition learning; Large scale; Cancer identification; PARTICLE SWARM OPTIMIZATION; GREY WOLF OPTIMIZER; BIOGEOGRAPHY-BASED OPTIMIZATION; ANT COLONY OPTIMIZATION; LEVY FLIGHT; KRILL HERD; ALGORITHM; EVOLUTIONARY; PERFORMANCE; SEARCH;
DOI
10.1007/s00521-019-04483-4
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The grey wolf optimizer (GWO) is a nature-inspired algorithm with strong local search ability but weak global search ability on some large-scale problems. To address this, a GWO based on random opposition learning, a strengthened grey-wolf hierarchy and a modified evolutionary population dynamics (EPD) strategy, named RSMGWO, is proposed. First, a search mode based on the strengthened grey-wolf hierarchy is added: each grey wolf uses two updating modes, a global-best search over randomly selected dimensions and the original GWO search, to improve optimization performance. Second, a modified EPD is embedded to improve performance further. Finally, a random opposition learning strategy is incorporated to avoid getting trapped in local optima. Experimental results on 19 benchmark functions of various (especially large-scale) dimensions and on multi-layer perceptron (MLP) optimization for cancer identification show that RSMGWO yields more competitive results than GWO and a number of state-of-the-art algorithms in terms of both accuracy and convergence.
Pages: 1305-1325
Page count: 21
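The abstract describes RSMGWO's three ingredients (a strengthened-hierarchy search mode, a modified EPD, and random opposition learning) only at a high level. Below is a minimal Python sketch of the two baseline mechanisms the method builds on: the canonical GWO position update and a generic opposition-based candidate. The paper's exact random-opposition rule, hierarchy-strengthening search and EPD modification are not given in the abstract, so the random scaling used in random_opposition is an assumption for illustration only.

import numpy as np

def gwo_step(wolves, alpha, beta, delta, a, lb, ub, rng):
    # One canonical GWO position update; alpha, beta, delta are the
    # current three best wolves, and the caller typically decreases
    # the coefficient a linearly from 2 to 0 over the iterations.
    new_wolves = np.empty_like(wolves)
    for i, x in enumerate(wolves):
        candidates = []
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            A = 2.0 * a * r1 - a          # |A| > 1 favours exploration, |A| < 1 exploitation
            C = 2.0 * r2
            D = np.abs(C * leader - x)    # distance to the leader
            candidates.append(leader - A * D)
        new_wolves[i] = np.clip(np.mean(candidates, axis=0), lb, ub)
    return new_wolves

def random_opposition(x, lb, ub, rng):
    # Generic opposition-based point lb + ub - x with a random scaling of x;
    # the exact random-opposition formula used by RSMGWO is an assumption here.
    return np.clip(lb + ub - rng.random(x.shape) * x, lb, ub)

# Example usage with hypothetical settings (30 wolves, 100-dimensional problem).
rng = np.random.default_rng(0)
lb, ub = -100.0, 100.0
wolves = rng.uniform(lb, ub, size=(30, 100))
alpha, beta, delta = wolves[0], wolves[1], wolves[2]   # placeholders for the three best wolves
wolves = gwo_step(wolves, alpha, beta, delta, a=2.0, lb=lb, ub=ub, rng=rng)

In a full opposition-learning scheme, an opposition candidate usually replaces the corresponding wolf only when it improves fitness, which is how such strategies typically help escape local optima.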