A probabilistic tournament learning swarm optimizer for large-scale optimization

Cited: 0
Authors
Xu, Li-Ting [1]
Yang, Qiang [1]
Li, Jian-Yu [2]
Xu, Pei-Lan [1]
Lin, Xin [1]
Gao, Xu-Dong [1]
Lu, Zhen-Yu [1]
Zhang, Jun [2,3,4,5]
Affiliations
[1] Nanjing Univ Informat Sci & Technol, Sch Artificial Intelligence, Nanjing, Peoples R China
[2] Nankai Univ, Coll Artificial Intelligence, Tianjin, Peoples R China
[3] Hanyang Univ, Dept Elect & Elect Engn, Ansan, South Korea
[4] Zhejiang Normal Univ, Jinhua, Peoples R China
[5] Chaoyang Univ Technol, Taichung, Taiwan
Funding
National Research Foundation of Singapore;
Keywords
Large-scale optimization; Probabilistic updating; Tournament learning; Particle swarm optimization; High-dimensional optimization; PARTICLE SWARM; ALGORITHM;
DOI
10.1016/j.ins.2025.122189
Chinese Library Classification
TP [Automation technology; computer technology];
Discipline code
0812;
Abstract
Large-scale optimization problems (LSOPs) pose significant challenges to particle swarm optimization (PSO) algorithms due to their high-dimensional search spaces and abundance of attractive local optima. To solve LSOPs effectively, this paper devises a probabilistic tournament learning swarm optimizer (PTLSO). Specifically, PTLSO first assigns each particle a nonlinear updating probability based on its fitness ranking, so that inferior particles are exponentially more likely to be updated, while superior ones retain exponentially higher probabilities of surviving unchanged. When a particle is selected for updating, two different tournament selection schemes choose two distinct superior exemplars from the peers with better fitness than that particle. With this random tournament learning scheme, each updated particle tends to learn from much better peers in diverse directions; the swarm in PTLSO therefore maintains high updating diversity throughout the evolution while still moving rapidly towards optimal regions. To further help PTLSO strike an effective balance between exploration and exploitation, a linear population reduction mechanism is adopted to dynamically shrink the swarm: a large swarm traverses the broad solution space in the initial period, and progressively fewer particles then let the swarm concentrate on finely exploiting the located promising zones as the evolution continues. With these mechanisms, PTLSO is expected to perform well on LSOPs. Extensive experiments on the widely recognized CEC2010 and CEC2013 LSOP benchmark suites substantiate the efficacy of PTLSO, demonstrating its clear superiority over 11 recent large-scale PSO variants, particularly on problems with complex properties. Additionally, experiments on the CEC2010 LSOPs with dimensionality varying from 500 to 2000 further corroborate the good scalability of PTLSO in addressing higher-dimensional LSOPs.
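The abstract names three core mechanisms: rank-based nonlinear updating probabilities, tournament selection of superior exemplars, and linear population reduction. The sketch below illustrates each in isolation; the exponential probability shape, the tournament size k=2, and the function names are illustrative assumptions, not the paper's exact formulas.

```python
import math
import random

def updating_probabilities(n):
    """Nonlinear updating probability per fitness rank (rank 0 = best).

    The paper only states that worse-ranked particles have exponentially
    higher updating probabilities; this particular exponential shape is an
    assumed illustration, not the published formula.
    """
    top = math.e - 1.0
    # Probability grows exponentially with rank and is normalized to [0, 1).
    return [(math.exp(rank / n) - 1.0) / top for rank in range(n)]

def tournament_exemplar(better_fitness, k=2):
    """Pick the fittest of k randomly sampled superior peers (minimization).

    `better_fitness` holds the fitness values of all peers strictly better
    than the particle being updated; the tournament size k=2 is an assumption.
    Returns the index of the winning exemplar within `better_fitness`.
    """
    pool = random.sample(range(len(better_fitness)),
                         min(k, len(better_fitness)))
    return min(pool, key=lambda i: better_fitness[i])

def swarm_size(t, t_max, n_init, n_min):
    """Linear population reduction: shrink the swarm from n_init particles
    at generation 0 down to n_min particles at generation t_max."""
    return round(n_init - (n_init - n_min) * t / t_max)
```

In a full optimizer, each particle triggered by its updating probability would draw two distinct exemplars via two tournament rounds and combine their positions in its velocity update, while untriggered particles survive unchanged into the (linearly shrinking) next generation.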
Pages: 16