Learning to sample initial solution for solving 0-1 discrete optimization problem by local search

Citations: 0
Authors
Liu, Xin [1 ]
Sun, Jianyong [1 ]
Xu, Zongben [1 ]
Affiliations
[1] Xi An Jiao Tong Univ, Sch Math & Stat, Xian 710049, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
discrete optimization; deep learning; graph convolutional network; local search; ALGORITHMS;
DOI
10.1007/s11425-023-2290-y
Chinese Library Classification
O29 [Applied Mathematics];
Discipline code
070104;
Abstract
Local search methods are convenient alternatives for solving discrete optimization problems (DOPs). These easy-to-implement methods are able to find approximate optimal solutions within a tolerable time limit. It is well known that the quality of the initial solution greatly affects the quality of the approximate solution found by a local search method. In this paper, we propose to treat the initial solution as a random variable and learn a preferable probability distribution for it. The aim is to sample a good initial solution from the learned distribution so that the local search can find a high-quality solution. We develop two different deep network models to deal with DOPs defined on sets (the knapsack problem) and on graphs (the maximum clique problem), respectively. The deep neural network learns a representation of an optimization problem instance and transforms this representation into a probability vector. Experimental results show that, given an initial solution sampled from the learned probability distribution, a local search method acquires much better approximate solutions than with a randomly sampled initial solution on the synthesized knapsack instances and the Erdős-Rényi random graph instances. Furthermore, with sampled initial solutions, a classical genetic algorithm achieves better solutions on the DIMACS maximum clique instances than with a randomly initialized population. In particular, we emphasize that the developed models generalize across problem dimensions and graph densities, an important advantage for deep-learning-based optimization algorithms.
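The pipeline the abstract describes can be illustrated with a minimal sketch for the knapsack case: a per-item probability vector (which in the paper would come from the learned network; the vector `probs` below is a made-up placeholder, not the paper's model output) is sampled into a 0-1 vector, a simple repair step restores feasibility, and a bit-flip local search improves the sample. The instance data and all function names here are illustrative assumptions, not from the paper.

```python
import random

def knapsack_value(x, values, weights, capacity):
    """Total value of selection x, or -inf if it violates the capacity."""
    if sum(w for w, xi in zip(weights, x) if xi) > capacity:
        return float("-inf")
    return sum(v for v, xi in zip(values, x) if xi)

def sample_initial(probs, rng):
    """Sample a 0-1 vector: item i is included with probability probs[i]."""
    return [1 if rng.random() < p else 0 for p in probs]

def local_search(x, values, weights, capacity, max_passes=1000):
    """First-improvement bit-flip local search started from x."""
    best = knapsack_value(x, values, weights, capacity)
    for _ in range(max_passes):
        improved = False
        for i in range(len(x)):
            x[i] ^= 1                       # flip one bit
            v = knapsack_value(x, values, weights, capacity)
            if v > best:
                best, improved = v, True    # keep the improving flip
            else:
                x[i] ^= 1                   # revert
        if not improved:
            break                           # local optimum reached
    return x, best

rng = random.Random(0)
values = [10, 7, 12, 3, 8]                  # toy instance (assumed data)
weights = [4, 3, 6, 1, 5]
capacity = 10
# Placeholder for the learned per-item probabilities the paper's network outputs.
probs = [0.8, 0.6, 0.7, 0.3, 0.4]
x0 = sample_initial(probs, rng)
# Repair an infeasible sample by dropping items until the capacity holds.
while knapsack_value(x0, values, weights, capacity) == float("-inf"):
    x0[x0.index(1)] = 0
x, best = local_search(x0, values, weights, capacity)
```

The point of the sketch is the interface, not the search itself: any local search (or, as in the paper's DIMACS experiments, a genetic algorithm's initial population) can consume solutions drawn from the learned distribution in place of uniformly random ones.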
Pages: 2019-2048
Number of pages: 30
Cited References
70 references in total
[1]  
Abdel-Basset M., 2018, COMPUTATIONAL INTELL, P185, DOI 10.1016/B978-0-12-813314-9.00010-4
[2]   Hyper-Reactive Tabu Search for MaxSAT [J].
Ansotegui, Carlos ;
Heymann, Britta ;
Pon, Josep ;
Sellmann, Meinolf ;
Tierney, Kevin .
LEARNING AND INTELLIGENT OPTIMIZATION, LION 12, 2019, 11353 :309-325
[3]  
Bahdanau D, 2016, arXiv, DOI arXiv:1409.0473
[4]  
Bello Irwan., 2016, 5 INT C LEARN REPR I
[5]   Machine learning for combinatorial optimization: A methodological tour d'horizon [J].
Bengio, Yoshua ;
Lodi, Andrea ;
Prouvost, Antoine .
EUROPEAN JOURNAL OF OPERATIONAL RESEARCH, 2021, 290 (02) :405-421
[6]  
Bertsekas DimitriP., 2017, DYNAMIC PROGRAMMING, V1
[7]   FINDING ALL CLIQUES OF AN UNDIRECTED GRAPH [J].
BRON, C ;
KERBOSCH, J .
COMMUNICATIONS OF THE ACM, 1973, 16 (09) :575-577
[8]  
Bruna J., 2013, SPECTRAL NETWORKS LO
[9]   Knapsack problems - An overview of recent advances. Part I: Single knapsack problems [J].
Cacchiani, Valentina ;
Iori, Manuel ;
Locatelli, Alberto ;
Martello, Silvano .
COMPUTERS & OPERATIONS RESEARCH, 2022, 143
[10]  
Chen Zibin, 2023, IEEE INFOCOM 2023 - IEEE Conference on Computer Communications, P1, DOI 10.1109/INFOCOM53939.2023.10228930