Differential evolution with mixed mutation strategy based on deep reinforcement learning

Cited by: 49
Authors
Tan, Zhiping [1 ]
Li, Kangshun [2 ]
Affiliations
[1] Guangdong Polytech Normal Univ, Coll Elect & Informat, Guangzhou 510665, Peoples R China
[2] South China Agr Univ, Coll Math & Informat, Guangzhou 510642, Peoples R China
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
Differential evolution; Mixed mutation strategy; Fitness landscape; Deep reinforcement learning; Deep Q-learning; ALGORITHM; OPTIMIZATION; ENSEMBLE; PARAMETERS; OPERATOR; DESIGN;
DOI
10.1016/j.asoc.2021.107678
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The performance of the differential evolution (DE) algorithm depends strongly on its mutation strategy, yet DE offers six commonly used mutation strategies, and selecting a suitable one for a given real-life optimization problem is difficult; in practice, the choice is usually guided by personal experience. To address this problem, this paper proposes a mixed mutation strategy DE algorithm based on a deep Q-network (DQN), named DEDQN, in which a deep reinforcement learning approach adaptively selects the mutation strategy during the evolution process. Applying the DQN to DE takes two steps. First, the DQN is trained offline on data collected over multiple runs of DEDQN on the training functions, recording fitness-landscape features and the benefit (reward) of applying each mutation strategy. Second, at each generation, the trained DQN predicts the mutation strategy from the fitness landscape of each test function. In addition, a historical memory parameter adaptation mechanism further improves DEDQN. The performance of DEDQN is evaluated on the CEC2017 benchmark function set against five state-of-the-art DE algorithms, and the experimental results indicate the competitive performance of the proposed algorithm. (C) 2021 Published by Elsevier B.V.
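The workflow the abstract describes, a pool of DE mutation strategies, a fitness-landscape state feature, and Q-learning over the strategy choice, can be sketched as follows. This is an illustrative toy, not the paper's implementation: it substitutes a small tabular Q-table updated online for the paper's offline-trained deep Q-network, uses only two strategies (DE/rand/1 and DE/best/1) rather than the full pool of six, takes a crude population-spread statistic in place of the paper's fitness-landscape analysis, and omits the historical memory parameter adaptation. All function and parameter names here are invented for the sketch.

```python
import numpy as np

def sphere(x):
    """Toy objective; the paper evaluates on the CEC2017 benchmark set."""
    return float(np.sum(x * x))

def de_mixed_q(func, dim=10, pop_size=20, gens=200, F=0.5, CR=0.9,
               eps=0.2, alpha=0.1, gamma=0.9, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.0, 5.0, (pop_size, dim))
    fit = np.array([func(x) for x in pop])

    # Two classic DE mutation strategies; the paper selects from a
    # larger pool using a trained deep Q-network.
    def rand_1(i):
        a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
        return a + F * (b - c)

    def best_1(i):
        b, c = pop[rng.choice(pop_size, 2, replace=False)]
        return pop[np.argmin(fit)] + F * (b - c)

    strategies = [rand_1, best_1]
    n_states = 4
    Q = np.zeros((n_states, len(strategies)))  # tabular stand-in for the DQN

    def state():
        # Crude fitness-landscape feature: discretized population spread.
        return min(n_states - 1, int(pop.std()))

    for _ in range(gens):
        s = state()
        # Epsilon-greedy selection of a mutation strategy for this generation.
        a = rng.integers(len(strategies)) if rng.random() < eps \
            else int(np.argmax(Q[s]))
        reward = 0.0
        for i in range(pop_size):
            v = strategies[a](i)
            # Binomial crossover with at least one mutated dimension.
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, v, pop[i])
            f_trial = func(trial)
            if f_trial < fit[i]:
                # Reward: normalized fitness improvement from this strategy.
                reward += (fit[i] - f_trial) / (abs(fit[i]) + 1e-12)
                pop[i], fit[i] = trial, f_trial
        # One-step Q-learning update on the strategy-selection policy.
        Q[s, a] += alpha * (reward + gamma * Q[state()].max() - Q[s, a])
    return float(fit.min())
```

The per-generation reward (normalized fitness improvement) mirrors the abstract's "benefit (reward) of applying each mutation strategy"; replacing the Q-table with a neural network trained offline on such (state, action, reward) tuples would recover the DQN structure the paper describes.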
Pages: 13