Differential evolution with mixed mutation strategy based on deep reinforcement learning

Cited by: 49
Authors
Tan, Zhiping [1 ]
Li, Kangshun [2 ]
Affiliations
[1] Guangdong Polytech Normal Univ, Coll Elect & Informat, Guangzhou 510665, Peoples R China
[2] South China Agr Univ, Coll Math & Informat, Guangzhou 510642, Peoples R China
Funding
National Key Research and Development Program of China; National Natural Science Foundation of China;
Keywords
Differential evolution; Mixed mutation strategy; Fitness landscape; Deep reinforcement learning; Deep Q-learning; ALGORITHM; OPTIMIZATION; ENSEMBLE; PARAMETERS; OPERATOR; DESIGN;
DOI
10.1016/j.asoc.2021.107678
CLC number
TP18 [Theory of artificial intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The performance of the differential evolution (DE) algorithm depends significantly on the mutation strategy. However, there are six commonly used mutation strategies in DE, and it is difficult to select a suitable one for a given real-life optimization problem; in practice, the choice of the most appropriate mutation strategy is usually based on personal experience. To address this problem, a mixed mutation strategy DE algorithm based on a deep Q-network (DQN), named DEDQN, is proposed in this paper, in which a deep reinforcement learning approach adaptively selects the mutation strategy during the evolution process. Applying the DQN to DE involves two steps. First, the DQN is trained offline on data describing the fitness landscape and the benefit (reward) of applying each mutation strategy, collected over multiple runs of DEDQN on the training functions. Second, at each generation the trained DQN predicts the mutation strategy to apply according to the fitness landscape of each test function. In addition, a historical-memory-based parameter adaptation mechanism is employed to further improve DEDQN. The performance of DEDQN is evaluated on the CEC2017 benchmark function set, and five state-of-the-art DE algorithms are compared with DEDQN in the experiments. The experimental results indicate the competitive performance of the proposed algorithm. (C) 2021 Published by Elsevier B.V.
Pages: 13
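To make the mechanism described in the abstract concrete, the following is a minimal, hypothetical Python sketch of DQN-driven mutation-strategy selection for DE. It is not the authors' implementation: the feature set (landscape_features), the strategy pool (STRATEGIES), the network size, and the improvement-based reward are all illustrative assumptions; the sketch only shows how a trained Q-network could pick one of several DE mutation strategies per generation from simple fitness-landscape descriptors.

# Hypothetical sketch (not the paper's code): per-generation mutation-strategy
# selection for DE driven by a small Q-network over fitness-landscape features.
import numpy as np
import torch
import torch.nn as nn

# Assumed candidate pool of classic DE mutation strategies (actions).
STRATEGIES = ["rand/1", "best/1", "current-to-best/1", "rand/2"]

def landscape_features(pop, fit):
    """Cheap fitness-landscape descriptors of the current population (assumed features)."""
    d = np.linalg.norm(pop - pop.mean(axis=0), axis=1)          # distance to population centroid
    fdc = np.corrcoef(d, fit)[0, 1] if d.std() > 0 and fit.std() > 0 else 0.0
    spread = fit.std() / (abs(fit.mean()) + 1e-12)               # normalized fitness spread
    return np.array([fdc, spread], dtype=np.float32)

class QNet(nn.Module):
    """Maps landscape features (state) to one Q-value per mutation strategy (action)."""
    def __init__(self, n_features=2, n_actions=len(STRATEGIES)):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                 nn.Linear(32, n_actions))
    def forward(self, x):
        return self.net(x)

def select_strategy(qnet, state, eps=0.1):
    """Epsilon-greedy choice: eps > 0 while collecting offline training data, eps = 0 at test time."""
    if np.random.rand() < eps:
        return np.random.randint(len(STRATEGIES))
    with torch.no_grad():
        q = qnet(torch.from_numpy(state))
    return int(torch.argmax(q).item())

# Inside the DE generation loop (sketch):
#   state  = landscape_features(pop, fit)
#   action = select_strategy(qnet, state)        # mutation strategy used for this generation
#   reward = best_fit_before - best_fit_after    # improvement used as the reward signal
# Offline, (state, action, reward, next_state) tuples gathered on training functions
# would be replayed to fit qnet with a standard DQN loss.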