Superiority combination learning distributed particle swarm optimization for large-scale optimization

Cited by: 14
Authors
Wang, Zi-Jia [1]
Yang, Qiang [2]
Zhang, Yu-Hui [3]
Chen, Shu-Hong [1]
Wang, Yuan-Gen [1]
Affiliations
[1] Guangzhou Univ, Sch Comp Sci & Cyber Engn, Guangzhou 510006, Peoples R China
[2] Nanjing Univ Informat Sci & Technol, Sch Artificial Intelligence, Nanjing 210044, Peoples R China
[3] Dongguan Univ Technol, Sch Comp Sci & Technol, Dongguan, Peoples R China
Keywords
Superiority combination learning strategy; Particle swarm optimization; Large-scale optimization; Master-slave multi-subpopulation distributed; COOPERATIVE COEVOLUTION; EVOLUTIONARY;
DOI
10.1016/j.asoc.2023.110101
Chinese Library Classification (CLC) Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Large-scale optimization problems (LSOPs) have become increasingly significant and challenging in the evolutionary computation (EC) community. This article proposes a superiority combination learning distributed particle swarm optimization (SCLDPSO) for LSOPs. In the algorithm design, a master-slave multi-subpopulation distributed model is adopted, which enables full communication and information exchange among different subpopulations and thereby enhances diversity. Moreover, a superiority combination learning (SCL) strategy is proposed, in which each worse particle in a poor-performance subpopulation randomly selects two well-performance subpopulations with better particles to learn from. In the learning process, each selected well-performance subpopulation generates a learning particle by merging different dimensions of different particles, which fully combines the superiorities of all particles in that subpopulation. The worse particle can then improve itself significantly by learning from these two superiority combination particles, leading to a more successful search. Experimental results show that SCLDPSO performs better than, or at least comparably with, other state-of-the-art large-scale optimization algorithms on both the CEC2010 and CEC2013 large-scale optimization test suites, including the winner of the competition on large-scale optimization. In addition, extended experiments with the dimensionality increased to 2000 demonstrate the scalability of SCLDPSO. Finally, an application to large-scale portfolio optimization problems further illustrates the applicability of SCLDPSO. (c) 2023 Elsevier B.V. All rights reserved.
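For readers who want the SCL mechanics in concrete form, the sketch below is a minimal illustration of one plausible reading of the abstract, not the authors' implementation: the function names (superiority_combination, scl_update), the per-dimension donor rule, and the inertia/acceleration coefficients (w, c) are assumptions introduced only for illustration.

```python
import numpy as np

def superiority_combination(subpop):
    """Build one combination particle from a well-performing subpopulation:
    for every dimension, copy the value of a randomly chosen member particle,
    so the result mixes the strengths of the whole subpopulation dimension-wise."""
    n, d = subpop.shape
    donors = np.random.randint(0, n, size=d)      # one donor particle per dimension
    return subpop[donors, np.arange(d)]

def scl_update(worse_particle, velocity, well_subpops, w=0.7, c=1.5):
    """Update a worse particle by learning from two combination particles drawn
    from two randomly selected well-performing subpopulations (PSO-style step)."""
    i, j = np.random.choice(len(well_subpops), size=2, replace=False)
    exemplar1 = superiority_combination(well_subpops[i])
    exemplar2 = superiority_combination(well_subpops[j])
    d = worse_particle.size
    r1, r2 = np.random.rand(d), np.random.rand(d)
    velocity = (w * velocity
                + c * r1 * (exemplar1 - worse_particle)
                + c * r2 * (exemplar2 - worse_particle))
    return worse_particle + velocity, velocity

# Example: a 1000-dimensional problem with four well-performing subpopulations.
well_subpops = [np.random.uniform(-5, 5, size=(20, 1000)) for _ in range(4)]
x = np.random.uniform(-5, 5, size=1000)
v = np.zeros(1000)
x_new, v_new = scl_update(x, v, well_subpops)
```

The point the sketch conveys is that each exemplar aggregates good dimensions from an entire well-performance subpopulation, so the worse particle learns from combined superiorities rather than from a single leader particle.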
Pages: 16
Related Papers
50 records in total
  • [41] Cross-Generation Elites Guided Particle Swarm Optimization for Large Scale Optimization
    Xie, Han-Yu
    Yang, Qiang
    Hu, Xiao-Min
    Chen, Wei-Neng
    PROCEEDINGS OF 2016 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (SSCI), 2016
  • [42] A Particle Swarm Optimization Decomposition Strategy for Large Scale Global Optimization
    McDevitt, Liam J. S.
    Ombuki-Berman, Beatrice M.
    Engelbrecht, Andries P.
    2022 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (SSCI), 2022: 1574 - 1581
  • [43] A Population Cooperation based Particle Swarm Optimization algorithm for large-scale multi-objective optimization
    Lu, Yongfan
    Li, Bingdong
    Liu, Shengcai
    Zhou, Aimin
    SWARM AND EVOLUTIONARY COMPUTATION, 2023, 83
  • [44] Transfer-Based Particle Swarm Optimization for Large-Scale Dynamic Optimization With Changing Variable Interactions
    Liu, Xiao-Fang
    Zhan, Zhi-Hui
    Zhang, Jun
    IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, 2024, 28 (06) : 1633 - 1643
  • [45] A Comprehensive Competitive Swarm Optimizer for Large-Scale Multiobjective Optimization
    Liu, Songbai
    Lin, Qiuzhen
    Li, Qing
    Tan, Kay Chen
    IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 2022, 52 (09): 5829 - 5842
  • [46] Cooperative coevolutionary multi-guide particle swarm optimization algorithm for large-scale multi-objective optimization problems
    Madani, Amirali
    Engelbrecht, Andries
    Ombuki-Berman, Beatrice
    SWARM AND EVOLUTIONARY COMPUTATION, 2023, 78
  • [47] A particle swarm optimizer with multi-level population sampling and dynamic p-learning mechanisms for large-scale optimization
    Sheng, Mengmeng
    Wang, Zidong
    Liu, Weibo
    Wang, Xi
    Chen, Shengyong
    Liu, Xiaohui
    KNOWLEDGE-BASED SYSTEMS, 2022, 242
  • [48] An agent-assisted heterogeneous learning swarm optimizer for large-scale optimization
    Sun, Yu
    Cao, Han
    SWARM AND EVOLUTIONARY COMPUTATION, 2024, 89
  • [49] A multi-swarm optimizer with a reinforcement learning mechanism for large-scale optimization
    Wang, Xujie
    Wang, Feng
    He, Qi
    Guo, Yinan
    SWARM AND EVOLUTIONARY COMPUTATION, 2024, 86
  • [50] Ranking-based biased learning swarm optimizer for large-scale optimization
    Deng, Hanbo
    Peng, Lizhi
    Zhang, Haibo
    Yang, Bo
    Chen, Zhenxiang
    INFORMATION SCIENCES, 2019, 493 : 120 - 137