Hyper-heuristics using multi-armed bandit models for multi-objective optimization

Cited by: 17
Authors
Almeida, Carolina P. [1 ]
Goncalves, Richard A. [1 ]
Venske, Sandra [1 ]
Luders, Ricardo [2 ]
Delgado, Myriam [2 ]
Affiliations
[1] State Univ Midwest Parana, Guarapuava, PR, Brazil
[2] Univ Tecnol Fed Parana, Curitiba, Parana, Brazil
Keywords
Multi-objective optimization; Hyper-heuristics; Multi-armed bandit; Local search algorithm; Memetic algorithms; Genetic algorithm; Classification; Decomposition; Tardiness; Selection; Level;
DOI
10.1016/j.asoc.2020.106520
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this work, we explore different multi-armed bandit-based hyper-heuristics applied to the multi-objective permutation flow shop problem, a scheduling problem that has been extensively studied owing to its relevance to industrial engineering. Three basic multi-armed bandit formulations are used in the hyper-heuristic selection mechanism: (i) classic, (ii) restless, and (iii) contextual. All three approaches are online selection perturbative hyper-heuristics, as they are designed to choose the low-level heuristic (a pair of crossover and mutation operators) applied to modify each solution during the search. The performance of the proposed approaches, combined with MOEA/DD (Multi-objective Evolutionary Algorithm based on Dominance and Decomposition), is evaluated using the hypervolume indicator and nonparametric statistical tests on a set of 220 problem instances with two and three objectives. The best proposed approach (contextual, with a linear realizability assumption) is compared with a stand-alone version of MOEA/DD using the best of the considered crossover and mutation operators, as well as with three state-of-the-art multi-objective algorithms. Results show that this approach outperforms MOEA/DD in scenarios with two and three objectives and, by encompassing both Pareto and decomposition strategies, is competitive with the state of the art in those scenarios. (C) 2020 Elsevier B.V. All rights reserved.
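The abstract names the three bandit formulations but not the selection rule itself. The following Python sketch shows how the "classic" variant could drive operator choice, assuming a standard UCB1 rule in which each arm is a crossover/mutation operator pair and the reward is a binary improvement signal; the class name UCB1OperatorSelector, the exploration constant c, and the reward definition are illustrative assumptions, not the authors' exact formulation.

import math

class UCB1OperatorSelector:
    """Classic multi-armed bandit (UCB1-style) over low-level heuristics.

    Each arm stands for one crossover/mutation operator pair; the reward is a
    hypothetical improvement signal (e.g., 1 if the offspring improved the
    population, 0 otherwise), not the paper's exact credit-assignment scheme.
    """

    def __init__(self, n_arms, c=2.0):
        self.n_arms = n_arms
        self.c = c                      # exploration strength (assumed value)
        self.counts = [0] * n_arms      # times each operator pair was applied
        self.values = [0.0] * n_arms    # running mean reward per operator pair

    def select(self):
        # Apply every operator pair once before using the UCB rule.
        for arm in range(self.n_arms):
            if self.counts[arm] == 0:
                return arm
        total = sum(self.counts)
        ucb = [self.values[a] + math.sqrt(self.c * math.log(total) / self.counts[a])
               for a in range(self.n_arms)]
        return max(range(self.n_arms), key=ucb.__getitem__)

    def update(self, arm, reward):
        # Incremental mean update: credit the operator pair that produced the offspring.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]


# Hypothetical usage inside one step of an evolutionary loop:
selector = UCB1OperatorSelector(n_arms=4)   # e.g., four operator pairs
arm = selector.select()                     # choose which operators to apply
# ... apply the chosen crossover/mutation and evaluate the offspring ...
reward = 1.0                                # e.g., 1.0 if the offspring improved, else 0.0
selector.update(arm, reward)

The restless and contextual variants described in the paper would replace the UCB rule above with, respectively, a rule that accounts for arm rewards changing over time and a rule conditioned on features of the search state; those are not sketched here.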
Pages: 19