Heterogeneous Attentions for Solving Pickup and Delivery Problem via Deep Reinforcement Learning

Cited by: 80

Authors
Li, Jingwen [1 ]
Xin, Liang [2 ]
Cao, Zhiguang [1 ]
Lim, Andrew [1 ]
Song, Wen [3 ]
Zhang, Jie [2 ]
Affiliations
[1] Natl Univ Singapore, Dept Ind Syst Engn & Management, Singapore 119077, Singapore
[2] Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore 639798, Singapore
[3] Shandong Univ, Inst Marine Sci & Technol, Qingdao 266237, Peoples R China
Funding
National Research Foundation, Singapore; National Natural Science Foundation of China;
Keywords
Reinforcement learning; Routing; Peer-to-peer computing; Heuristic algorithms; Deep learning; Decoding; Decision making; Heterogeneous attention; deep reinforcement learning; pickup and delivery problem; VEHICLE; OPTIMIZATION; BRANCH; CUT;
DOI
10.1109/TITS.2021.3056120
CLC (Chinese Library Classification)
TU [Building Science];
Discipline code
0813
Abstract
Recently, there has been an emerging trend of applying deep reinforcement learning to solve the vehicle routing problem (VRP), where a learnt policy governs the selection of the next node to visit. However, existing methods cannot adequately handle the pairing and precedence relationships in the pickup and delivery problem (PDP), a representative variant of VRP. To address this challenge, we leverage a novel neural network integrated with a heterogeneous attention mechanism to empower the policy in deep reinforcement learning to automatically select nodes. In particular, the heterogeneous attention mechanism prescribes attentions for each role of the nodes while accounting for the precedence constraint, i.e., a pickup node must precede its paired delivery node. Further integrated with a masking scheme, the learnt policy is expected to find higher-quality solutions to the PDP. Extensive experimental results show that our method outperforms state-of-the-art heuristics and deep learning models, respectively, and generalizes well to different distributions and problem sizes.
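The precedence constraint described in the abstract is commonly enforced at decoding time by a feasibility mask. The sketch below is a minimal, assumed illustration of such a masking rule (the node-indexing convention and function name are hypothetical, not taken from the paper): a delivery node becomes selectable only after its paired pickup has been visited, and visited nodes are excluded.

```python
def feasibility_mask(num_pairs, visited):
    """Illustrative PDP decoding mask (assumed indexing, not the authors' exact scheme).

    Node 0 is the depot; nodes 1..num_pairs are pickups; the delivery paired
    with pickup i is node i + num_pairs. `visited` is the set of nodes already
    routed. Returns a boolean list: True where a node may be chosen next.
    """
    n = 2 * num_pairs + 1
    mask = [False] * n
    for node in range(1, n):
        if node in visited:
            continue  # already routed, stays masked
        if node <= num_pairs:
            mask[node] = True  # pickup: feasible whenever unvisited
        else:
            pickup = node - num_pairs
            mask[node] = pickup in visited  # delivery needs its pickup first
    return mask
```

With two pickup-delivery pairs, only pickups are feasible at the start; once pickup 1 is visited, its delivery (node 3) opens up while the precedence of the remaining pair is still enforced.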
Pages: 2306 - 2315 (10 pages)
Related Papers
50 items total
  • [1] Routing with Pickup and Delivery via Deep Reinforcement Learning
    Yildiz, Ozge Aslan
    Saricicek, Inci
    Ozkan, Kemal
    Yazici, Ahmet
    32ND IEEE SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE, SIU 2024, 2024,
  • [2] Reinforcement Learning for the Pickup and Delivery Problem
    Liu, Fagui
    Lai, Chengqi
    Wang, Lvshengbiao
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2022, PT II, 2022, 13530 : 87 - 98
  • [3] Deep Reinforcement Learning for the Capacitated Pickup and Delivery Problem with Time Windows
    Soroka, A. G.
    Meshcheryakov, A. V.
    Gerasimov, S. V.
    PATTERN RECOGNITION AND IMAGE ANALYSIS, 2023, 33 (02) : 169 - 178
  • [5] Deep Reinforcement Learning for Solving the Heterogeneous Capacitated Vehicle Routing Problem
    Li, Jingwen
    Ma, Yining
    Gao, Ruize
    Cao, Zhiguang
    Lim, Andrew
    Song, Wen
    Zhang, Jie
    IEEE TRANSACTIONS ON CYBERNETICS, 2022, 52 (12) : 13572 - 13585
  • [6] Solving the train dispatching problem via deep reinforcement learning
    Agasucci, Valerio
    Grani, Giorgio
    Lamorgese, Leonardo
    JOURNAL OF RAIL TRANSPORT PLANNING & MANAGEMENT, 2023, 26
  • [7] Solving Maximal Stable Set Problem via Deep Reinforcement Learning
    Wang, Taiyi
    Shi, Jiahao
    ICAART: PROCEEDINGS OF THE 13TH INTERNATIONAL CONFERENCE ON AGENTS AND ARTIFICIAL INTELLIGENCE - VOL 2, 2021, : 483 - 489
  • [8] Deep reinforcement learning solve the task fairness-oriented flexible pickup and delivery problem
    Tian, Ran
    Sun, Zhihui
    Lu, Xin
    Li, Zhaobin
    Chang, Longlong
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 278
  • [9] SOLVING THE VEHICLE-DRONE PICKUP AND DELIVERY PROBLEM IN ROAD CONGESTION: A HEURISTIC AND ITS DEEP REINFORCEMENT LEARNING-BASED IMPROVEMENT
    Yang, Xiwang
    He, Zhichao
    Liu, Ya
    Liu, Shulin
    JOURNAL OF INDUSTRIAL AND MANAGEMENT OPTIMIZATION, 2025, 21 (02) : 1630 - 1654