A self-learning differential evolution algorithm with population range indicator

Cited by: 4
Authors
Zhao, Fuqing [1 ]
Zhou, Hao [1 ]
Xu, Tianpeng [1 ]
Jonrinaldi [2 ]
Affiliations
[1] Lanzhou Univ Technol, Sch Comp & Commun, Lanzhou 730050, Peoples R China
[2] Univ Andalas, Dept Ind Engn, Padang 25163, Indonesia
Funding
National Natural Science Foundation of China
Keywords
Deep reinforcement learning; Differential evolution; Double deep Q network; Population range indicator; Optimization
DOI
10.1016/j.eswa.2023.122674
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
The differential evolution (DE) algorithm is widely regarded as one of the most influential evolutionary algorithms for addressing complex optimization problems. However, a fixed mutation strategy limits the adaptability of DE, and the failure to exploit historical information limits its optimization ability. In this paper, an indicator-based self-learning differential evolution algorithm (ISDE) is proposed. A jump-out mechanism based on deep reinforcement learning is adopted to control the mutation intensity of the population. The neural network in the jump-out mechanism acts as a decision maker: it controls the mutation intensity of the population and is trained with a double deep Q network algorithm on the data continuously generated during the evolution process. A population range indicator (PRI) is used to characterize individual differences within the population, and a diversity maintenance mechanism preserves these differences according to the value of the PRI. The experimental results reveal that the comprehensive performance of ISDE is superior to that of the comparison algorithms on the CEC 2017 real-parameter numerical optimization benchmark suite.
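A minimal Python sketch of the workflow described in the abstract, assuming a standard DE/rand/1/bin baseline. The PRI formulation (mean per-dimension population range normalized by the search range), the threshold-based jump-out trigger, the partial re-initialization rule, the sphere test function, and all parameter values are illustrative assumptions; in ISDE the jump-out decision is made by a neural network trained with a double deep Q network, which is omitted here for brevity.

```python
# Illustrative sketch only; the helper names, the PRI formula, and the
# rule-based jump-out trigger below are assumptions, not the authors' code.
import numpy as np


def sphere(x):
    """Toy objective (minimization); stand-in for a CEC 2017 benchmark function."""
    return float(np.sum(x ** 2))


def population_range_indicator(pop, lower, upper):
    """Assumed PRI: mean per-dimension population range, normalized by the search range."""
    span = pop.max(axis=0) - pop.min(axis=0)
    return float(np.mean(span / (upper - lower)))


def isde_sketch(obj=sphere, dim=10, pop_size=30, max_gen=300,
                f=0.5, cr=0.9, pri_threshold=0.05, reinit_frac=0.2, seed=0):
    rng = np.random.default_rng(seed)
    lower, upper = -100.0, 100.0
    pop = rng.uniform(lower, upper, (pop_size, dim))
    fit = np.array([obj(ind) for ind in pop])

    for _ in range(max_gen):
        for i in range(pop_size):
            # Standard DE/rand/1 mutation and binomial crossover.
            a, b, c = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            mutant = np.clip(pop[a] + f * (pop[b] - pop[c]), lower, upper)
            mask = rng.random(dim) < cr
            mask[rng.integers(dim)] = True  # guarantee at least one mutant gene
            trial = np.where(mask, mutant, pop[i])
            trial_fit = obj(trial)
            if trial_fit <= fit[i]:  # greedy one-to-one selection
                pop[i], fit[i] = trial, trial_fit

        # Diversity maintenance driven by the (assumed) PRI: when the population
        # collapses, re-initialize a fraction of the worst individuals. In ISDE this
        # jump-out decision is made by a DDQN-trained decision maker, not a fixed rule.
        if population_range_indicator(pop, lower, upper) < pri_threshold:
            worst = np.argsort(fit)[-int(reinit_frac * pop_size):]
            pop[worst] = rng.uniform(lower, upper, (len(worst), dim))
            fit[worst] = [obj(ind) for ind in pop[worst]]

    best = int(np.argmin(fit))
    return pop[best], fit[best]


if __name__ == "__main__":
    x_best, f_best = isde_sketch()
    print(f"best fitness found: {f_best:.3e}")
```

Replacing the fixed pri_threshold rule with a learned policy whose state includes the PRI and whose actions set the mutation or re-initialization intensity would recover the structure of the method described above.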
Pages: 14
Related papers
50 records in total
  • [1] Self-learning differential evolution algorithm for scheduling of internal tasks in cross-docking
    Buakum, Dollaya
    Wisittipanich, Warisa
    SOFT COMPUTING, 2022, 26 (21) : 11809 - 11826
  • [2] An improved differential evolution algorithm using learning automata and population topologies
    Kordestani, Javidan Kazemi
    Ahmadi, Ali
    Meybodi, Mohammad Reza
    APPLIED INTELLIGENCE, 2014, 41 (04) : 1150 - 1169
  • [3] Deep-space trajectory optimizations using differential evolution with self-learning
    Choi, Jin Haeng
    Lee, Jinah
    Park, Chandeok
    ACTA ASTRONAUTICA, 2022, 191 : 258 - 269
  • [4] Binary differential evolution with self-learning for multi-objective feature selection
    Zhang, Yong
    Gong, Dun-wei
    Gao, Xiao-zhi
    Tian, Tian
    Sun, Xiao-yan
    INFORMATION SCIENCES, 2020, 507 : 67 - 85
  • [5] Learning to Learn Evolutionary Algorithm: A Learnable Differential Evolution
    Liu, Xin
    Sun, Jianyong
    Zhang, Qingfu
    Wang, Zhenkun
    Xu, Zongben
    IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE, 2023, 7 (06) : 1605 - 1620
  • [6] A Double Deep Q Network Guided Online Learning Differential Evolution Algorithm
    Zhao, Fuqing
    Yang, Mingxiang
    ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT I, ICIC 2024, 2024, 14862 : 196 - 208
  • [7] Improved differential evolution algorithm with decentralisation of population
    Ali, Musrrat
    Pant, Millie
    Abraham, Ajith
    INTERNATIONAL JOURNAL OF BIO-INSPIRED COMPUTATION, 2011, 3 (01) : 17 - 30
  • [8] A self-adaptive multi-population differential evolution algorithm
    Zhu, Lin
    Ma, Yongjie
    Bai, Yulong
    NATURAL COMPUTING, 2020, 19 (01) : 211 - 235