How the Duration of the Learning Period Affects the Performance of Random Gradient Selection Hyper-Heuristics

Cited by: 0
Authors
Lissovoi, Andrei [1 ]
Oliveto, Pietro S. [1 ]
Warwicker, John Alasdair [2 ]
Affiliations
[1] Univ Sheffield, Dept Comp Sci, Rigorous Res, Sheffield S1 4DP, S Yorkshire, England
[2] Karlsruhe Inst Technol, Inst Operat Res, D-76185 Karlsruhe, Germany
Source
THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE | 2020 / Vol. 34
Funding
Engineering and Physical Sciences Research Council (EPSRC), UK;
Keywords
COMPLEXITY; SEARCH;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recent analyses have shown that a random gradient hyper-heuristic (HH) using randomised local search (RLS_k) low-level heuristics with different neighbourhood sizes k can optimise the unimodal benchmark function LeadingOnes in the best expected time achievable with the available heuristics, if sufficiently long learning periods τ are employed. In this paper, we examine the impact of the learning period on the performance of the hyper-heuristic for standard unimodal benchmark functions with different characteristics: Ridge, where the HH has to learn that RLS_1 is always the best low-level heuristic, and OneMax, where different low-level heuristics are preferable in different areas of the search space. We rigorously prove that super-linear learning periods τ are required for the HH to achieve optimal expected runtime for Ridge. Conversely, a sub-logarithmic learning period is the best static choice for OneMax, while using super-linear values for τ increases the expected runtime above the asymptotic unary unbiased black-box complexity of the problem. We prove that a random gradient HH which automatically adapts the learning period throughout the run has optimal asymptotic expected runtime for both OneMax and Ridge. Additionally, we show experimentally that it outperforms any static learning period for realistic problem sizes.
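The abstract describes the selection mechanism only at a high level. The following is a minimal Python sketch of a random gradient selection hyper-heuristic with a learning period, under assumptions not stated above: elitist RLS_k low-level heuristics with k drawn from {1, 2}, OneMax as the benchmark, and illustrative names (rls_k, random_gradient_hh, steps_without_improvement) with a placeholder default for τ. It is not the paper's exact algorithm, and the adaptive-τ variant analysed in the paper is not shown.

import random

def onemax(x):
    # OneMax benchmark: fitness is the number of 1-bits in the bit string.
    return sum(x)

def rls_k(x, k):
    # Assumed low-level heuristic RLS_k: flip exactly k distinct bits
    # chosen uniformly at random and return the offspring.
    y = x[:]
    for i in random.sample(range(len(x)), k):
        y[i] = 1 - y[i]
    return y

def random_gradient_hh(n, ks=(1, 2), tau=100, max_iters=10**6):
    # Random gradient selection sketch: keep the current heuristic for as long
    # as it yields a strict improvement within tau consecutive steps; once tau
    # steps pass without improvement, pick a new heuristic uniformly at random.
    # tau is the learning period; the default here is a placeholder, not a
    # recommendation from the paper.
    x = [random.randint(0, 1) for _ in range(n)]
    fx = onemax(x)
    k = random.choice(ks)                    # initial heuristic chosen at random
    steps_without_improvement = 0
    for _ in range(max_iters):
        if fx == n:                          # optimum of OneMax reached
            break
        y = rls_k(x, k)
        fy = onemax(y)
        improved = fy > fx
        if fy >= fx:                         # elitist acceptance (ties accepted)
            x, fx = y, fy
        if improved:
            steps_without_improvement = 0    # success: keep heuristic, restart period
        else:
            steps_without_improvement += 1
            if steps_without_improvement >= tau:
                k = random.choice(ks)        # period expired: re-select at random
                steps_without_improvement = 0
    return x, fx

if __name__ == "__main__":
    best, fitness = random_gradient_hh(n=100, ks=(1, 2), tau=50)
    print(fitness)

The design question studied in the paper is the choice of τ: if the period is too short, the HH switches heuristics before it can observe an improvement, while an overly long period wastes steps on a heuristic that is poorly suited to the current region of the search space.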
Pages: 2376 - 2383
Number of pages: 8