Projected Gaussian Markov Improvement Algorithm for High-Dimensional Discrete Optimization via Simulation

Cited: 0
Authors
Li, Xinru [1 ]
Song, Eunhye [2 ]
Affiliations
[1] General Motors, 30500 Mound Rd, Warren, MI 48092, USA
[2] Georgia Institute of Technology, 755 Ferst Dr, Atlanta, GA, USA
Source
ACM TRANSACTIONS ON MODELING AND COMPUTER SIMULATION | 2024, Vol. 34, No. 3
Funding
U.S. National Science Foundation;
Keywords
Gaussian Markov random field; high-dimensional discrete optimization via simulation; projection; Bayesian optimization; dependence;
DOI
10.1145/3649463
Chinese Library Classification
TP39 [Computer Applications];
Discipline Classification Codes
081203; 0835;
Abstract
This article considers a discrete optimization via simulation (DOvS) problem defined on a graph embedded in a high-dimensional integer grid. Several DOvS algorithms that model the responses at the solutions as a realization of a Gaussian Markov random field (GMRF) have been proposed, exploiting its inferential power and computational benefits. However, the computational cost of inference grows exponentially in the dimension. We propose the projected Gaussian Markov improvement algorithm (pGMIA), which projects the solution space onto a lower-dimensional space, creating the region-layer graph, to reduce the cost of inference. Each node on the region-layer graph can be mapped to the set of solutions projected to that node; these solutions form a lower-dimensional solution-layer graph. We define the response at each region-layer node to be the average of the responses within the corresponding solution-layer graph, and from this relation we derive the region-layer GMRF that models the region-layer responses. At each iteration, pGMIA alternates between the two layers to make a sampling decision: it first selects a region-layer node based on the lower-resolution inference provided by the region-layer GMRF, and then makes a sampling decision among the solutions within that node's solution-layer graph based on the higher-resolution inference from the solution-layer GMRF. To solve even higher-dimensional problems (e.g., 100 dimensions), we also propose pGMIA+, a multi-layer extension of pGMIA. We show that both pGMIA and pGMIA+ converge to the optimum almost surely asymptotically and empirically demonstrate their competitiveness against state-of-the-art high-dimensional Bayesian optimization algorithms.
Pages: 29
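
The two-layer idea described in the abstract can be illustrated with a small sketch. The following Python snippet is a minimal, self-contained illustration and not the authors' implementation: the region-layer and solution-layer GMRF inference of pGMIA is replaced by plain sample averages, and every name in it (simulate, project, region_dims, the toy objective, and the exploration rule) is a hypothetical choice made only for this example.

import numpy as np

rng = np.random.default_rng(0)

d = 6                  # dimension of the (toy) integer-grid solution space
levels = 5             # each coordinate takes values 0, ..., levels - 1
region_dims = (0, 1)   # hypothetical choice of coordinates defining the region layer


def simulate(x, noise=1.0):
    # Noisy evaluation of a toy objective at an integer solution x.
    return float(np.sum((np.asarray(x) - 2.0) ** 2) + rng.normal(scale=noise))


def project(x):
    # Map a solution to its region-layer node: keep only the projected coordinates.
    return tuple(int(x[i]) for i in region_dims)


samples = {}  # region-layer node -> list of observed responses of its solutions
for _ in range(50):
    # Region-layer step: favor the region with the lowest average observed
    # response (a crude stand-in for the region-layer GMRF inference),
    # occasionally exploring a randomly drawn region.
    if samples and rng.random() > 0.2:
        region = min(samples, key=lambda r: np.mean(samples[r]))
    else:
        region = tuple(int(v) for v in rng.integers(0, levels, size=len(region_dims)))

    # Solution-layer step: sample a solution whose projection equals the chosen
    # region (a crude stand-in for the solution-layer GMRF inference).
    x = rng.integers(0, levels, size=d)
    for pos, val in zip(region_dims, region):
        x[pos] = val

    # The region-layer response is the average of the responses of the
    # solutions that project to the node, so observations are pooled per node.
    samples.setdefault(project(x), []).append(simulate(x))

best_region = min(samples, key=lambda r: np.mean(samples[r]))
print("estimated best region-layer node:", best_region)

The structural point the sketch mirrors is that each sampling decision is made in two stages: a coarse choice among region-layer nodes, followed by a finer choice among the solutions that project to the chosen node.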