Black-Box Optimization Revisited: Improving Algorithm Selection Wizards Through Massive Benchmarking

Cited by: 23
Authors
Meunier, Laurent [1 ]
Rakotoarison, Herilalaina [2 ]
Wong, Pak Kan [3 ]
Roziere, Baptiste [1 ]
Rapin, Jeremy [1 ]
Teytaud, Olivier [1 ]
Moreau, Antoine [4 ]
Doerr, Carola [5 ]
Affiliations
[1] Facebook AI Res, F-75004 Paris, France
[2] Univ Paris Saclay, INRIA, LRI, TAU, F-91405 Orsay, France
[3] Chinese Univ Hong Kong, Dept Comp Sci & Engn, Hong Kong, Peoples R China
[4] Univ Clermont Auvergne, CNRS, Inst Pascal, SIGMA Clermont, F-63000 Clermont Ferrand, France
[5] Sorbonne Univ, LIP6, F-75252 Paris, France
Keywords
Benchmark testing; Optimization; Linear programming; Reproducibility of results; Heuristic algorithms; Open source software; Training; Benchmarking; black-box optimization; GLOBAL OPTIMIZATION; SELF-ADAPTATION; CMA-ES; PERFORMANCE; EVOLUTION; SEARCH;
DOI
10.1109/TEVC.2021.3108185
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Existing studies in black-box optimization suffer from low generalizability, caused by a typically selective choice of problem instances used for training and testing optimization algorithms. Among other issues, this practice promotes overfitting and poor-performing user guidelines. We address this shortcoming by introducing a general-purpose algorithm selection wizard that was designed and tested on a previously unseen breadth of black-box optimization problems: academic benchmarks and real-world applications; discrete, numerical, and mixed-integer problems; small- to very large-scale problems; and noisy, dynamic, and static problems. Not only did we use the already extensive benchmark environment available in Nevergrad, but we also extended it significantly by adding several benchmark suites, including Pyomo, Photonics, large-scale global optimization (LSGO), and MuJoCo. Our wizard achieves competitive performance on all benchmark suites and significantly outperforms previous state-of-the-art algorithms on some of them, including YABBOB and LSGO. This performance is obtained without any task-specific parametrization. The algorithm selection wizard, all of its base solvers, and the benchmark suites are available for reproducible research in the open-source Nevergrad platform.
Pages: 490-500
Page count: 11