Automatic MILP solver configuration by learning problem similarities

Cited by: 1
Authors
Hosny, Abdelrahman [1 ]
Reda, Sherief [1 ,2 ]
Affiliations
[1] Brown Univ, Dept Comp Sci, Providence, RI 02912 USA
[2] Brown Univ, Sch Engn, Providence, RI 02912 USA
Keywords
Mixed integer linear programming; Algorithm configuration; Metric learning; Deep learning
DOI
10.1007/s10479-023-05508-x
Chinese Library Classification
C93 [Management]; O22 [Operations Research]
Subject Classification Codes
070105; 12; 1201; 1202; 120202
Abstract
A large number of real-world optimization problems can be formulated as Mixed Integer Linear Programs (MILP). MILP solvers expose numerous configuration parameters to control their internal algorithms. Solutions, and their associated costs or runtimes, are significantly affected by the choice of configuration parameters, even when problem instances have the same number of decision variables and constraints. On one hand, using the default solver configuration leads to suboptimal solutions. On the other hand, searching and evaluating a large number of configurations for every problem instance is time-consuming and, in some cases, infeasible. In this study, we aim to predict configuration parameters for unseen problem instances that yield lower-cost solutions without the time overhead of searching and evaluating configurations at solve time. Toward that goal, we first investigate the cost correlation of MILP problem instances that come from the same distribution when solved using different configurations. We show that instances that have similar costs under one solver configuration also have similar costs under another solver configuration in the same runtime environment. After that, we present a methodology based on Deep Metric Learning to learn MILP similarities that correlate with the costs of their final solutions. At inference time, a new problem instance is first projected into the learned metric space using the trained model, and configuration parameters are instantly predicted using the previously explored configurations of its nearest-neighbor instance in the learned embedding space. Empirical results on real-world problem benchmarks show that our method predicts configuration parameters that improve solution costs by up to 38% compared to existing approaches.
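The inference step described in the abstract amounts to a nearest-neighbor lookup in the learned embedding space. Below is a minimal sketch of that step, assuming a hypothetical trained encoder `embed` that maps an instance's feature vector into the metric space and a table of per-instance best configurations found offline; the names and data layout are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: predict a solver configuration for an unseen MILP instance by
# reusing the best-known configuration of its nearest neighbor in a learned
# embedding space. `embed`, `train_embeddings`, and `train_best_configs` are
# hypothetical placeholders for the trained metric-learning model and the
# offline-explored training data.
import numpy as np

def predict_configuration(new_features, train_embeddings, train_best_configs, embed):
    """Return the best-known configuration of the nearest training instance.

    new_features       : 1-D feature vector describing the unseen MILP instance
    train_embeddings   : (n, d) array of embeddings of the n training instances
    train_best_configs : list of n solver-configuration dicts found offline
    embed              : callable mapping a feature vector to a d-dim embedding
    """
    query = embed(new_features)                               # project into the metric space
    dists = np.linalg.norm(train_embeddings - query, axis=1)  # distances to all training instances
    nearest = int(np.argmin(dists))                           # index of the nearest neighbor
    return train_best_configs[nearest]                        # reuse its configuration
```

Any distance consistent with the learned metric (e.g., cosine instead of Euclidean) could be substituted in the lookup.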
Pages: 909-936
Page count: 28