Deep Policy Dynamic Programming for Vehicle Routing Problems

Cited by: 43
Authors
Kool, Wouter [1 ,2 ]
van Hoof, Herke [1 ]
Gromicho, Joaquim [1 ,2 ]
Welling, Max [1 ]
Affiliations
[1] Univ Amsterdam, Amsterdam, Netherlands
[2] ORTEC, Zoetermeer, Netherlands
Source
INTEGRATION OF CONSTRAINT PROGRAMMING, ARTIFICIAL INTELLIGENCE, AND OPERATIONS RESEARCH, CPAIOR 2022 | 2022, Vol. 13292
Keywords
Dynamic Programming; Deep Learning; Vehicle Routing; Traveling Salesman Problem; Large Neighborhood Search; Algorithm; Optimization
DOI
10.1007/978-3-031-08011-1_14
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Routing problems are a class of combinatorial problems with many practical applications. Recently, end-to-end deep learning methods have been proposed to learn approximate solution heuristics for such problems. In contrast, classical dynamic programming (DP) algorithms guarantee optimal solutions, but scale badly with the problem size. We propose Deep Policy Dynamic Programming (DPDP), which aims to combine the strengths of learned neural heuristics with those of DP algorithms. DPDP prioritizes and restricts the DP state space using a policy derived from a deep neural network, which is trained to predict edges from example solutions. We evaluate our framework on the travelling salesman problem (TSP), the vehicle routing problem (VRP) and the TSP with time windows (TSPTW), and show that the neural policy improves the performance of (restricted) DP algorithms, making them competitive with strong alternatives such as LKH, while also outperforming most other neural approaches for solving TSPs, VRPs and TSPTWs with 100 nodes.
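The core idea described in the abstract, restricting a DP state space with a scoring policy, can be illustrated with a minimal Python sketch for the TSP. This is not the authors' implementation: in DPDP the score comes from a learned edge heat-map produced by a neural network, whereas here a hypothetical `policy_score` callback (cost-based in the usage example) stands in for it. With a large enough beam the procedure reduces to exact Held-Karp DP; a small beam trades optimality for speed, which is the trade-off the paper's policy is trained to manage.

```python
def restricted_dp_tsp(dist, beam_size, policy_score):
    """Beam-restricted dynamic programming for the TSP.

    dist: n x n distance matrix; the tour starts and ends at node 0.
    policy_score(state_key, cost) -> value to minimize; only the
    beam_size best-scoring DP states survive each expansion step.
    """
    n = len(dist)
    # A DP state is (visited_set, current_node) -> (cost, partial_tour).
    beam = {(frozenset([0]), 0): (0.0, [0])}
    for _ in range(n - 1):
        expanded = {}
        for (visited, cur), (cost, tour) in beam.items():
            for nxt in range(n):
                if nxt in visited:
                    continue
                key = (visited | {nxt}, nxt)
                cand = (cost + dist[cur][nxt], tour + [nxt])
                # DP dominance: keep the cheapest way to reach each state.
                if key not in expanded or cand[0] < expanded[key][0]:
                    expanded[key] = cand
        # Policy restriction: keep only the most promising states.
        ranked = sorted(expanded.items(),
                        key=lambda kv: policy_score(kv[0], kv[1][0]))
        beam = dict(ranked[:beam_size])
    # Close the tour by returning to node 0.
    return min((cost + dist[cur][0], tour + [0])
               for (visited, cur), (cost, tour) in beam.items())

# Usage on a toy 4-node asymmetric instance, scoring states by cost alone:
dist = [[0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
cost, tour = restricted_dp_tsp(dist, beam_size=8,
                               policy_score=lambda key, c: c)
# With beam_size=8 the search is exhaustive for n=4, so the tour is optimal.
```

Replacing the cost-based `policy_score` with per-edge scores from a trained model is what turns this generic beam-restricted DP into the DPDP scheme the paper evaluates.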
Pages: 190-213
Page count: 24