IMODEII: an Improved IMODE algorithm based on the Reinforcement Learning

Cited by: 13
Authors
Sallam, Karam M. [1 ]
Abdel-Basset, Mohamed [2 ]
El-Abd, Mohammed [3 ]
Wagdy, Ali [4 ,5 ]
Affiliations
[1] Univ Canberra, Sch IT & Syst, Canberra, ACT 2601, Australia
[2] Zagazig Univ, Fac Comp & Informat, Zagazig, Egypt
[3] Amer Univ Kuwait, Coll Engn & Appl Sci, Kuwait, Kuwait
[4] Cairo Univ, Fac Grad Studies Stat Res, Operat Res Dept, Giza 12613, Egypt
[5] Amer Univ Cairo, Sch Sci Engn, Dept Math & Actuarial Sci, Cairo 11835, Egypt
Keywords
reinforcement learning; differential evolution; evolutionary algorithms; unconstrained optimisation; DIFFERENTIAL EVOLUTION; SELECTION MECHANISM; GENETIC ALGORITHM; OPTIMIZATION; ENSEMBLE; PARAMETERS; OPERATORS; HYBRID; SOLVE
DOI
10.1109/CEC55065.2022.9870420
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The success of the differential evolution algorithm depends on its offspring breeding strategy and the associated control parameters. Improved Multi-Operator Differential Evolution (IMODE) proved its efficiency and ranked first in the CEC2020 competition. In this paper, an improved IMODE, called IMODEII, is introduced. In IMODEII, Reinforcement Learning (RL), a computational methodology that simulates interaction-based learning, is used as an adaptive operator selection approach. During the optimization process, RL selects the best-performing of three candidate actions to evolve a set of solutions, based on the population state and a reward value. Unlike IMODE, IMODEII uses only two mutation strategies. We tested the performance of the proposed IMODEII on 12 benchmark functions with 10 and 20 variables taken from the CEC2022 competition on single-objective bound-constrained numerical optimisation. A comparison between the proposed IMODEII and state-of-the-art algorithms is conducted, with the results demonstrating the efficiency of the proposed IMODEII.
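The abstract's core mechanism — RL choosing among operators from a population state and a reward signal — can be illustrated with a minimal sketch. This is not the paper's exact method: the state discretization, reward (improvement of the best fitness), Q-learning update, and the two classic mutation strategies below are all illustrative assumptions layered on a bare-bones 1-D DE loop.

```python
import random

def sphere(x):
    return x * x  # toy 1-D objective

def de_rand(pop, i, F=0.5):
    # DE/rand/1 mutation (illustrative choice of strategy)
    a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
    return a + F * (b - c)

def de_best(pop, i, F=0.5):
    # DE/best/1 mutation (illustrative choice of strategy)
    best = min(pop, key=sphere)
    a, b = random.sample([p for j, p in enumerate(pop) if j != i], 2)
    return best + F * (a - b)

def diversity_state(pop):
    # coarse population state: spread above/below an assumed threshold
    return 0 if max(pop) - min(pop) > 1.0 else 1

def run(generations=200, pop_size=20, alpha=0.3, gamma=0.9, eps=0.1, seed=1):
    random.seed(seed)
    pop = [random.uniform(-5, 5) for _ in range(pop_size)]
    ops = [de_rand, de_best]
    Q = [[0.0] * len(ops) for _ in range(2)]    # Q[state][action]
    for _ in range(generations):
        s = diversity_state(pop)
        # epsilon-greedy operator selection
        if random.random() < eps:
            a = random.randrange(len(ops))
        else:
            a = max(range(len(ops)), key=lambda k: Q[s][k])
        before = min(map(sphere, pop))
        for i in range(pop_size):               # greedy DE survivor selection
            trial = ops[a](pop, i)
            if sphere(trial) < sphere(pop[i]):
                pop[i] = trial
        reward = before - min(map(sphere, pop))  # fitness improvement as reward
        s2 = diversity_state(pop)
        Q[s][a] += alpha * (reward + gamma * max(Q[s2]) - Q[s][a])
    return min(map(sphere, pop))

print(run())
```

The design point is that the selector is decoupled from the operators: adding a third mutation strategy only extends `ops` and widens the Q-table, which is what makes RL attractive as an operator-selection layer.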
Pages: 8
Related Papers
50 records
  • [1] Orbit correction based on improved reinforcement learning algorithm
    Chen, Xiaolong
    Jia, Yongzhi
    Qi, Xin
    Wang, Zhijun
    He, Yuan
    PHYSICAL REVIEW ACCELERATORS AND BEAMS, 2023, 26 (04)
  • [2] Improved Artificial Bee Colony Algorithm Based on Reinforcement Learning
    Ma, Ping
    Zhang, Hong-Li
    INTELLIGENT COMPUTING THEORIES AND APPLICATION, ICIC 2016, PT II, 2016, 9772 : 721 - 732
  • [3] An improved multiagent reinforcement learning algorithm
    Meng, XP
    Babuska, R
    Busoniu, L
    Chen, Y
    Tan, WY
    2005 IEEE/WIC/ACM International Conference on Intelligent Agent Technology, Proceedings, 2005, : 337 - 343
  • [4] A HEART FAILURE PREDICTION ALGORITHM BASED ON IMPROVED REINFORCEMENT LEARNING FRAMEWORK
    Zhang, Yijie
    Yang, Xiangbo
    JOURNAL OF MECHANICS IN MEDICINE AND BIOLOGY, 2024, 24 (08)
  • [5] An Improved Multi-objective Optimization Algorithm Based on Reinforcement Learning
    Liu, Jun
    Zhou, Yi
    Qiu, Yimin
    Li, Zhongfeng
    ADVANCES IN SWARM INTELLIGENCE, ICSI 2022, PT I, 2022, : 501 - 513
  • [6] A new asynchronous reinforcement learning algorithm based on improved parallel PSO
    Shifei Ding
    Wei Du
    Xingyu Zhao
    Lijuan Wang
    Weikuan Jia
    Applied Intelligence, 2019, 49 : 4211 - 4222
  • [7] Path planning for mobile robot based on improved reinforcement learning algorithm
    Xu X.
    Yuan J.
    Zhongguo Guanxing Jishu Xuebao/Journal of Chinese Inertial Technology, 2019, 27 (03): : 314 - 320
  • [9] An improved genetic algorithm based on reinforcement learning for aircraft assembly scheduling problem
    Wen, Xiaoyu
    Zhang, Xinyu
    Xing, Hongwen
    Ye, Guoyong
    Li, Hao
    Zhang, Yuyan
    Wang, Haoqi
    COMPUTERS & INDUSTRIAL ENGINEERING, 2024, 193
  • [10] An improved reinforcement learning algorithm based on knowledge transfer and applications in autonomous vehicles
    Ding, Derui
    Ding, Zifan
    Wei, Guoliang
    Han, Fei
    NEUROCOMPUTING, 2019, 361 : 243 - 255