Truncation-learning-driven surrogate assisted social learning particle swarm optimization for computationally expensive problem

Cited by: 5
Authors
Yu, Haibo [1 ]
Kang, Li [2 ]
Tan, Ying [3 ]
Sun, Chaoli [3 ]
Zeng, Jianchao [1 ,4 ]
Affiliations
[1] North Univ China, Inst Big Data & Visual Comp, Taiyuan 030051, Peoples R China
[2] Hohai Univ, Key Lab Integrated Regulat & Resource Dev Shallow, Minist Educ, Nanjing 210098, Peoples R China
[3] Taiyuan Univ Sci & Technol, Dept Comp Sci & Technol, Taiyuan 030024, Peoples R China
[4] Taiyuan Univ Sci & Technol, Div Ind & Syst Engn, Taiyuan 030024, Peoples R China
Funding
Youth Science Foundation of Shanxi Province;
Keywords
Truncation learning; Radial basis function; Greedy sampling; Particle swarm optimization; Expensive problem; EFFICIENT GLOBAL OPTIMIZATION; EVOLUTIONARY OPTIMIZATION; DIFFERENTIAL EVOLUTION; FITNESS APPROXIMATION; NEURAL-NETWORK; ALGORITHM; DESIGN; GENERATION; STRATEGY; SUPPORT;
DOI
10.1016/j.asoc.2020.106812
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Surrogate-assisted evolutionary optimization greatly reduces the computational burden of evolutionary algorithms on computationally expensive problems. However, new issues arise concerning the compatibility and fault tolerance among surrogates, evolutionary learning operators, and problem properties. To this end, this paper proposes a truncation-learning-driven surrogate-assisted social learning particle swarm optimizer (TL-SSLPSO) to coordinate these three ingredients. To avoid and correct the deceptions that low-confidence, surrogate-induced exemplars cause in behavior learning, TL-SSLPSO equally segments the iterative population into multiple sub-populations with different fitness levels and selects exemplars from randomly chosen higher-level sub-populations for the behavior learning of each lower-level sub-population, while truncating the behavior learning of the highest-level sub-population, which is composed of some of the best approximated or exactly evaluated particles and is retained directly for the next generation. In addition, a greedy sampling strategy is employed to find promising solutions with better fitness than the global best, complementing the truncation learning. Extensive experiments on twenty-four widely used benchmark problems and a stepped cantilever beam design problem with 17 steps assess the effectiveness of the cooperation between truncation learning and greedy sampling, and comparisons with several state-of-the-art algorithms demonstrate the superiority of the proposed method. (C) 2020 Elsevier B.V. All rights reserved.
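The truncation-learning step described in the abstract can be sketched as follows. This is a minimal illustration based only on the abstract's description, not the authors' implementation: all names (`truncation_learning_step`, `n_levels`) and the exact update rule are assumptions. The population is sorted by (possibly surrogate-approximated) fitness and split into equally sized sub-populations; the best level is truncated, i.e. passed to the next generation unchanged, while every other particle moves toward an exemplar drawn from a randomly chosen better level.

```python
import random

def truncation_learning_step(population, fitness, n_levels=4, seed=0):
    """One hypothetical truncation-learning iteration.

    population : list of positions (lists of floats)
    fitness    : list of fitness values, lower is better
    n_levels   : number of equally sized fitness-level sub-populations
    """
    rng = random.Random(seed)
    # Rank particles by fitness and cut the ranking into equal levels;
    # levels[0] holds the best particles (the truncated elite).
    order = sorted(range(len(population)), key=lambda i: fitness[i])
    size = len(population) // n_levels
    levels = [order[k * size:(k + 1) * size] for k in range(n_levels)]

    new_pop = [list(x) for x in population]
    for lvl in range(1, n_levels):          # level 0 skips behavior learning
        for i in levels[lvl]:
            src = rng.randrange(lvl)        # any strictly better level
            exemplar = population[rng.choice(levels[src])]
            for d in range(len(population[i])):
                # Social-learning-style move toward the exemplar
                # (inertia/mean terms of full SL-PSO omitted for brevity).
                r = rng.random()
                new_pop[i][d] += r * (exemplar[d] - population[i][d])
    return new_pop
```

In the full method, the fitness values fed to this step would come mostly from the RBF surrogate, with the greedy sampling strategy supplying occasional exact evaluations near the predicted optimum.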
Pages: 23