Superiority combination learning distributed particle swarm optimization for large-scale optimization

Times Cited: 14
Authors
Wang, Zi-Jia [1 ]
Yang, Qiang [2 ]
Zhang, Yu-Hui [3]
Chen, Shu-Hong [1 ]
Wang, Yuan-Gen [1]
Affiliations
[1] Guangzhou Univ, Sch Comp Sci & Cyber Engn, Guangzhou 510006, Peoples R China
[2] Nanjing Univ Informat Sci & Technol, Sch Artificial Intelligence, Nanjing 210044, Peoples R China
[3] Dongguan Univ Technol, Sch Comp Sci & Technol, Dongguan, Peoples R China
Keywords
Superiority combination learning strategy; Particle swarm optimization; Large-scale optimization; Master-slave multi-subpopulation distributed; COOPERATIVE COEVOLUTION; EVOLUTIONARY
DOI
10.1016/j.asoc.2023.110101
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Large-scale optimization problems (LSOPs) have become increasingly significant and challenging in the evolutionary computation (EC) community. This article proposes a superiority combination learning distributed particle swarm optimization (SCLDPSO) algorithm for LSOPs. In the algorithm design, a master-slave multi-subpopulation distributed model is adopted, which enables full communication and information exchange among different subpopulations and thereby enhances diversity. Moreover, a superiority combination learning (SCL) strategy is proposed, in which each worse particle in a poor-performance subpopulation randomly selects two well-performance subpopulations containing better particles to learn from. In the learning process, each selected well-performance subpopulation generates a learning particle by merging different dimensions of different particles, thereby combining the superiorities of all particles in that subpopulation. The worse particle can improve itself significantly by learning from these two superiority combination particles, leading to a successful search. Experimental results show that SCLDPSO performs better than, or at least comparably with, other state-of-the-art large-scale optimization algorithms on both the CEC2010 and CEC2013 large-scale optimization test suites, including the winner of the competition on large-scale optimization. In addition, extended experiments with the dimensionality increased to 2000 demonstrate the scalability of SCLDPSO. Finally, an application to large-scale portfolio optimization problems further illustrates the applicability of SCLDPSO. (c) 2023 Elsevier B.V. All rights reserved.
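Illustrative sketch (not part of the record): the abstract describes the SCL exemplar construction only at a high level, so the Python fragment below is a hedged approximation of that idea, not the authors' implementation. The function names, the velocity-style update rule, and all parameters are assumptions for illustration; the actual update equations, subpopulation ranking, and master-slave communication scheme are given only in the full paper (DOI above).

import numpy as np

def superiority_combination_particle(subpop):
    # Merge different dimensions of different particles in one well-performance
    # subpopulation into a single "learning particle" (dimension-wise random pick).
    n, d = subpop.shape
    donor = np.random.randint(0, n, size=d)   # which particle supplies each dimension
    return subpop[donor, np.arange(d)]

def scl_update(worse_x, worse_v, well_subpops):
    # Update one worse particle by learning from exemplars built from two randomly
    # chosen well-performance subpopulations. The velocity-style rule below is an
    # assumed placeholder, not the paper's equation.
    i, j = np.random.choice(len(well_subpops), size=2, replace=False)
    exemplar1 = superiority_combination_particle(well_subpops[i])
    exemplar2 = superiority_combination_particle(well_subpops[j])
    d = worse_x.size
    r1, r2, r3 = np.random.rand(d), np.random.rand(d), np.random.rand(d)
    new_v = r1 * worse_v + r2 * (exemplar1 - worse_x) + r3 * (exemplar2 - worse_x)
    return worse_x + new_v, new_v

# Toy usage: three well-performance subpopulations of 10 particles in 1000 dimensions.
well_subpops = [np.random.rand(10, 1000) for _ in range(3)]
x, v = np.random.rand(1000), np.zeros(1000)
x_new, v_new = scl_update(x, v, well_subpops)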
Pages: 16
Related Papers (50 in total)
  • [21] A Two-Phase Learning-Based Swarm Optimizer for Large-Scale Optimization
    Lan, Rushi
    Zhu, Yu
    Lu, Huimin
    Liu, Zhenbing
    Luo, Xiaonan
    IEEE TRANSACTIONS ON CYBERNETICS, 2021, 51 (12) : 6284 - 6293
  • [22] Adaptive multi-context cooperatively coevolving particle swarm optimization for large-scale problems
    Tang, Ruo-Li
    Wu, Zhou
    Fang, Yan-Jun
    SOFT COMPUTING, 2017, 21: 4735-4754
  • [23] A Dual-Competition-Based Particle Swarm Optimizer for Large-Scale Optimization
    Gao, Weijun
    Peng, Xianjie
    Guo, Weian
    Li, Dongyang
    MATHEMATICS, 2024, 12 (11)
  • [24] A particle swarm optimizer with dynamic balance of convergence and diversity for large-scale optimization
    Li, Dongyang
    Wang, Lei
    Guo, Weian
    Zhang, Maoqing
    Hu, Bo
    Wu, Qidi
    APPLIED SOFT COMPUTING, 2023, 132
  • [25] A Level-Based Learning Swarm Optimizer for Large-Scale Optimization
    Yang, Qiang
    Chen, Wei-Neng
    Da Deng, Jeremiah
    Li, Yun
    Gu, Tianlong
    Zhang, Jun
    IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, 2018, 22 (04) : 578 - 594
  • [26] Incremental particle swarm optimization for large-scale dynamic optimization with changing variable interactions
    Liu, Xiao-Fang
    Zhan, Zhi-Hui
    Zhang, Jun
    APPLIED SOFT COMPUTING, 2023, 141
  • [27] Adaptive multi-context cooperatively coevolving particle swarm optimization for large-scale problems
    Tang, Ruo-Li
    Wu, Zhou
    Fang, Yan-Jun
    SOFT COMPUTING, 2017, 21 (16) : 4735 - 4754
  • [28] A sinusoidal social learning swarm optimizer for large-scale optimization
    Liu, Nengxian
    Pan, Jeng-Shyang
    Chu, Shu-Chuan
    Hu, Pei
    KNOWLEDGE-BASED SYSTEMS, 2023, 259
  • [29] A Dimension Group-Based Comprehensive Elite Learning Swarm Optimizer for Large-Scale Optimization
    Yang, Qiang
    Zhang, Kai-Xuan
    Gao, Xu-Dong
    Xu, Dong-Dong
    Lu, Zhen-Yu
    Jeon, Sang-Woon
    Zhang, Jun
    MATHEMATICS, 2022, 10 (07)
  • [30] Compressed-Encoding Particle Swarm Optimization with Fuzzy Learning for Large-Scale Feature Selection
    Yang, Jia-Quan
    Chen, Chun-Hua
    Li, Jian-Yu
    Liu, Dong
    Li, Tao
    Zhan, Zhi-Hui
    SYMMETRY-BASEL, 2022, 14 (06)