Parallel efficient global optimization by using the minimum energy criterion

Cited by: 2
Authors
Li, ShiXiang [1 ]
Tian, Yubin [2 ]
Wang, Dianpeng [2 ]
Affiliations
[1] Beijing Inst Technol, Beijing, Peoples R China
[2] Beijing Inst Technol, Sch Math, Key Lab Math Theory & Computat Informat Secur, Beijing, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Efficient global optimization; expected improvement; minimum energy criterion; parallel strategy; algorithm
DOI
10.1080/00949655.2023.2217707
CLC Classification
TP39 [Computer Applications]
Subject Classification
081203; 0835
Abstract
In optimization problems with an expensive black-box objective function, the evaluation budget is severely restricted. Bayesian optimization methods such as expected improvement (EI) and hierarchical expected improvement (HEI) have been proposed to address this problem. However, neither EI nor HEI is parallel: both rely on a one-point-at-a-time strategy. In this work, a new parallel Bayesian framework based on the minimum energy criterion is proposed to improve these popular one-point methods. The proposed framework saves time and cost by reducing the number of iterations, and it avoids being trapped in local optima by encouraging exploration of the optimization space. Additionally, a shrink-augment strategy is introduced to adaptively correct the local surrogate model of the black-box function, which further benefits the optimization. Numerical and illustrative experiments demonstrate the superiority of the proposed method over several other Bayesian methods. The results show that the new framework balances exploitation and exploration well and performs strongly in global optimization.
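The abstract does not give the exact form of the paper's minimum energy criterion, so the following is only a minimal sketch of the general idea it describes: selecting a batch of evaluation points by trading expected improvement against an energy-style repulsion that keeps the batch spread out. It assumes a Gaussian-process surrogate, standard EI for minimization, and an inverse-power energy; all names here (expected_improvement, energy, select_batch, lam, s) are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """Standard expected improvement for minimization, given the
    surrogate's posterior mean mu and standard deviation sigma."""
    sigma = np.maximum(sigma, 1e-12)
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def energy(x, points, s=2.0):
    """Inverse-power 'energy' of a candidate x against a set of points:
    large when x crowds an existing point, small when x is far away."""
    if len(points) == 0:
        return 0.0
    d = np.linalg.norm(points - x, axis=1)
    return float(np.sum(1.0 / np.maximum(d, 1e-12) ** s))

def select_batch(candidates, mu, sigma, f_best, X_evaluated, q=4, lam=0.1):
    """Greedily pick a batch of q points: each pick maximizes EI minus a
    weighted energy penalty against evaluated and already-picked points,
    so the batch spreads out instead of clustering at one optimum."""
    ei = expected_improvement(mu, sigma, f_best)
    anchors = [np.asarray(p) for p in X_evaluated]
    batch = []
    for _ in range(q):
        pts = np.array(anchors)
        scores = np.array([ei[i] - lam * energy(candidates[i], pts)
                           for i in range(len(candidates))])
        best = int(np.argmax(scores))
        batch.append(candidates[best])
        anchors.append(candidates[best])
    return np.array(batch)
```

Under this sketch, the weight lam and the exponent s control the exploitation-exploration trade-off: larger values push the batch points apart, which mirrors the abstract's claim that the framework escapes local optima by encouraging exploration of the optimization space.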
Pages
3104-3125 (22 pages)
References
34 items
[1] [Anonymous] (2014). R: A Language and Environment for Statistical Computing.
[2] Azimi, J. (2010). Advances in Neural Information Processing Systems, 23, 109.
[3] Bliek, L. (2021). arXiv preprint.
[4] Briffoteaux, G., Gobert, M., Ragonnet, R., Gmys, J., Mezmaz, M., Melab, N., Tuyttens, D. (2020). Parallel surrogate-assisted optimization: Batched Bayesian Neural Network-assisted GA versus q-EGO. Swarm and Evolutionary Computation, 57.
[5] Bull, A. D. (2011). Journal of Machine Learning Research, 12, 2879.
[6] Chen, Z., Mak, S., Wu, C. F. J. (2024). A Hierarchical Expected Improvement Method for Bayesian Optimization. Journal of the American Statistical Association, 119(546), 1619-1632.
[7] Chevalier, C. (2013). Learning and Intelligent Optimization: 7th International Conference, LION 7, Revised Selected Papers, LNCS 7997, 59. DOI: 10.1007/978-3-642-44973-4_7.
[8] Contal, E. (2013). Machine Learning and Knowledge Discovery in Databases, 225. DOI: 10.1007/978-3-642-40988-2_15.
[9] Desautels, T. (2012). Proceedings of the 29th International Conference on Machine Learning (ICML), 2, 1191.
[10] Eriksson, D. (2019). Advances in Neural Information Processing Systems, 32.