Selecting a diverse set of benchmark instances from a tunable model problem for black-box discrete optimization algorithms

Cited by: 12
Authors
Weise, Thomas [1]
Chen, Yan [1]
Li, Xinlu [1]
Wu, Zhize [1]
Affiliations
[1] Hefei Univ, Sch Artificial Intelligence & Big Data, Inst Appl Optimizat, Jinxiu Dadao 99, Hefei 230601, Anhui, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Experimentation; Benchmarking; Optimization; Runtime behavior; Black-box optimization; Discrete optimization; FITNESS LANDSCAPES; EVOLUTIONARY OPTIMIZATION; PHASE-TRANSITION; NK; NEUTRALITY; PERFORMANCE; ONEMAX;
DOI
10.1016/j.asoc.2020.106269
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
As the number of practical applications of discrete black-box metaheuristics grows, benchmarking these algorithms is rapidly gaining importance. While new algorithms are often introduced for specific problem domains, researchers are also interested in which general problem characteristics are hard for which type of algorithm. The W-Model is a benchmark function for discrete black-box optimization that allows for the easy, fast, and reproducible generation of problem instances exhibiting characteristics such as ruggedness, deceptiveness, epistasis, and neutrality in a tunable way. We conduct the first large-scale study with the W-Model in its fixed-length single-objective form, investigating 17 algorithm configurations (including Evolutionary Algorithms and local searches) on 8372 problem instances. We develop and apply a machine learning methodology to automatically discover several clusters of optimization-process runtime behaviors, as well as their causes grounded in the algorithm and model parameters. Both a detailed statistical evaluation and our methodology confirm that the different model parameters allow us to generate problem instances of different hardness, but we also find that the investigated algorithms struggle with different problem characteristics. With our methodology, we select a set of 19 diverse problem instances with which researchers can conduct a fast but still in-depth analysis of algorithm performance. The best-performing algorithms in our experiment were Evolutionary Algorithms applying Frequency Fitness Assignment, which turned out to be robust over a wide range of problem settings and solved more instances than the other tested algorithms. (C) 2020 Elsevier B.V. All rights reserved.
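The abstract credits Frequency Fitness Assignment (FFA) with the most robust performance in the study. As a rough illustration of the FFA idea only, and not the authors' actual experimental setup, the following minimal Python sketch plugs FFA into a (1+1) EA on the classic OneMax problem; the function names, parameter values, and stopping rule are illustrative assumptions.

    import random
    from collections import defaultdict

    def onemax(bits):
        # Objective function: count of one-bits; the optimum is the all-ones string.
        return sum(bits)

    def flip_one_bit(bits):
        # Unary mutation operator: flip one uniformly chosen bit.
        child = bits[:]
        i = random.randrange(len(child))
        child[i] = 1 - child[i]
        return child

    def one_plus_one_ffa(n=32, max_steps=100000, seed=1):
        # (1+1) EA steered by Frequency Fitness Assignment (FFA): instead of
        # comparing raw objective values, compare how often each value has
        # been encountered so far and keep the solution whose value is rarer.
        # Sketch only -- n, max_steps, and the stopping rule are illustrative
        # assumptions, not settings from the paper.
        random.seed(seed)
        x = [random.randint(0, 1) for _ in range(n)]
        fx = onemax(x)
        H = defaultdict(int)    # encounter counts, one per objective value
        for _ in range(max_steps):
            y = flip_one_bit(x)
            fy = onemax(y)
            H[fx] += 1          # both the parent's and the offspring's
            H[fy] += 1          # objective values are charged each step
            if H[fy] <= H[fx]:  # keep the solution with the rarer value
                x, fx = y, fy
            if fx == n:         # all-ones optimum of OneMax reached
                break
        return x, fx

    if __name__ == "__main__":
        best, value = one_plus_one_ffa()
        print("best OneMax value found:", value)

The point of the sketch is that selection never compares the objective values fx and fy directly, only how often each value has been encountered so far, which is what makes FFA insensitive to the scale and, to a degree, the deceptiveness of the objective function.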
Pages: 17