A variable population size opposition-based learning for differential evolution algorithm and its applications on feature selection

Cited by: 5
Authors
Wang, Le [1 ]
Li, Jiahang [2 ]
Yan, Xuefeng [1 ]
Affiliations
[1] East China Univ Sci & Technol, Key Lab Smart Mfg Energy Chem Proc, Shanghai 200237, Peoples R China
[2] Huazhong Univ Sci & Technol, Sch Mech Sci & Engn, Wuhan 430074, Peoples R China
Keywords
Differential evolution; Parameter control; Opposition-based learning; Feature selection; Optimization
DOI
10.1007/s10489-023-05179-y
CLC classification number
TP18 [Artificial intelligence theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Opposition-based differential evolution (ODE) cannot adaptively adjust the number of individuals that participate in opposition-based learning, which makes it difficult for the algorithm to solve complex optimization problems. In this manuscript, we present SASODE, a variable-population-size ODE that leverages self-adaptive parameters. The core idea of SASODE is to assign a jumping rate to each individual in the population; this rate is the key parameter that determines whether an individual enters the subpopulation. The initial rates are drawn from a normal distribution with an empirically chosen mean. During the iterative process, this mean is adjusted adaptively using the historical information of the individuals retained from the preceding generation. The variation of the mean in turn changes the individuals' jumping rates and thereby adjusts the subpopulation size. In addition, a constant c and the Lehmer mean together maintain a balance between exploration and exploitation in SASODE. Experimental results show that the algorithm ranks first in the Wilcoxon test on 61 benchmarks and three optimization problems in three dimensions. We further confirm that SASODE achieves an accuracy of 96% or higher on the feature selection problem. SASODE therefore outperforms the compared state-of-the-art algorithms in terms of convergence rate and accuracy.
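The abstract describes the adaptation mechanism only at a high level. The following Python sketch illustrates one plausible reading of it: a per-individual jumping rate sampled from a normal distribution decides subpopulation membership, and the distribution's mean is nudged toward the Lehmer mean of rates that produced retained individuals. All names (mu, sigma, c, opposition_step, update_mu) and the exact update rule are illustrative assumptions, not the paper's actual implementation.

import numpy as np

rng = np.random.default_rng(0)

def lehmer_mean(values, p=2):
    # Lehmer mean of order p: sum(x^p) / sum(x^(p-1)).
    # For p = 2 it weights larger successful rates more heavily.
    values = np.asarray(values, dtype=float)
    return np.sum(values**p) / np.sum(values**(p - 1))

def opposition_step(pop, lower, upper, mu, sigma=0.1):
    # Sample a jumping rate per individual from N(mu, sigma), clipped to [0, 1].
    jr = np.clip(rng.normal(mu, sigma, size=len(pop)), 0.0, 1.0)
    # Individuals whose rate exceeds a uniform draw enter the subpopulation
    # that undergoes opposition-based learning.
    mask = rng.random(len(pop)) < jr
    # Opposite points of the selected subpopulation within the search bounds.
    opp = lower + upper - pop[mask]
    return opp, jr, mask

def update_mu(mu, successful_jr, c=0.1):
    # Assumed update rule: shift the mean of the jumping-rate distribution
    # toward the Lehmer mean of the rates whose individuals were retained
    # (improved) in the preceding generation; c balances the two terms.
    if len(successful_jr) == 0:
        return mu
    return (1 - c) * mu + c * lehmer_mean(successful_jr)

In use, a caller would evaluate the opposite points, keep whichever of each individual and its opposite is fitter, and pass the jumping rates of the improved individuals to update_mu before the next generation, so the subpopulation size tracks what has recently worked.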
Pages: 959-984
Page count: 26