DHL: Deep reinforcement learning-based approach for emergency supply distribution in humanitarian logistics

Cited by: 0
|
Authors
Junchao Fan
Xiaolin Chang
Jelena Mišić
Vojislav B. Mišić
Hongyue Kang
Affiliations
[1] Beijing Jiaotong University, Beijing Key Laboratory of Security and Privacy in Intelligent Transportation
[2] Ryerson University
Source
Peer-to-Peer Networking and Applications | 2022 / Volume 15
Keywords
Deep reinforcement learning; Deep Q Network; Humanitarian logistics; Resource allocation; Emergency response
DOI
Not available
Abstract
Alleviating human suffering in disasters is one of the main objectives of humanitarian logistics. The lack of emergency rescue materials is the root cause of this suffering and must be considered when making emergency supply distribution decisions. Large-scale disasters often cause varying degrees of damage across the affected areas, which leads to differences in both human suffering and the demand for emergency supplies among those areas. This paper considers a novel emergency supply distribution scenario in humanitarian logistics that takes these differences into account. In this scenario, besides economic goals such as minimizing costs, the humanitarian goal of alleviating the suffering of survivors is treated as one of the main bases for emergency supply distribution decision making. We first formulate the emergency supply distribution problem as a Markov Decision Process. Then, to obtain an optimal allocation policy that reduces economic cost while decreasing the suffering of survivors, a Deep Q-Network-based approach for emergency supply distribution in Humanitarian Logistics (DHL) is developed. Numerical results demonstrate that DHL achieves better performance and lower time complexity than other baselines.
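To make the Deep Q-Network machinery referenced in the abstract concrete, the following is a minimal sketch, not the authors' implementation: it assumes a simplified state of remaining stock plus per-area unmet demand, a discrete action "allocate one unit of supply to area i", and a placeholder reward standing in for the paper's combined economic-cost and suffering objective. The names QNet, select_action, and dqn_update, as well as the random transitions in the usage example, are illustrative assumptions rather than details from the paper.

```python
# Minimal DQN sketch for a supply-allocation setting (illustrative, not the paper's code).
import random
from collections import deque

import torch
import torch.nn as nn
import torch.nn.functional as F


class QNet(nn.Module):
    """Maps a state vector to one Q-value per discrete allocation action."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def select_action(q_net: QNet, state: torch.Tensor, epsilon: float, n_actions: int) -> int:
    """Epsilon-greedy choice of which area receives the next unit of supply."""
    if random.random() < epsilon:
        return random.randrange(n_actions)
    with torch.no_grad():
        return int(q_net(state.unsqueeze(0)).argmax(dim=1).item())


def dqn_update(q_net, target_net, optimizer, batch, gamma: float = 0.99):
    """One-step temporal-difference update against a frozen target network."""
    states, actions, rewards, next_states, dones = batch
    q_sa = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = rewards + gamma * (1 - dones) * target_net(next_states).max(dim=1).values
    loss = F.smooth_l1_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    # Tiny usage example with random transitions; the negative reward is a
    # stand-in for the cost-plus-suffering objective described in the abstract.
    n_areas, state_dim = 4, 1 + 4            # remaining stock + unmet demand per area
    q_net, target_net = QNet(state_dim, n_areas), QNet(state_dim, n_areas)
    target_net.load_state_dict(q_net.state_dict())
    optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
    buffer = deque(maxlen=10_000)            # experience replay buffer

    for step in range(200):
        s = torch.rand(state_dim)
        a = select_action(q_net, s, epsilon=0.1, n_actions=n_areas)
        r, s_next, done = -torch.rand(1).item(), torch.rand(state_dim), 0.0
        buffer.append((s, a, r, s_next, done))
        if len(buffer) >= 32:
            sample = random.sample(list(buffer), 32)
            batch = (
                torch.stack([t[0] for t in sample]),
                torch.tensor([t[1] for t in sample]),
                torch.tensor([t[2] for t in sample]),
                torch.stack([t[3] for t in sample]),
                torch.tensor([t[4] for t in sample]),
            )
            dqn_update(q_net, target_net, optimizer, batch)
        if step % 50 == 0:                   # periodic target-network sync
            target_net.load_state_dict(q_net.state_dict())
```

The target network and replay buffer follow the standard DQN recipe; in an actual humanitarian-logistics environment, the random transitions above would be replaced by the MDP dynamics of supply stocks, deliveries, and per-area demand.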
Pages: 2376-2389
Page count: 13
Related papers (50 in total)
  • [1] DHL: Deep reinforcement learning-based approach for emergency supply distribution in humanitarian logistics
    Fan, Junchao
    Chang, Xiaolin
    Misic, Jelena
    Misic, Vojislav B.
    Kang, Hongyue
    PEER-TO-PEER NETWORKING AND APPLICATIONS, 2022, 15 (05) : 2376 - 2389
  • [2] Reinforcement learning approach for resource allocation in humanitarian logistics
    Yu, Lina
    Zhang, Canrong
    Jiang, Jingyan
    Yang, Huasheng
    Shang, Huayan
    EXPERT SYSTEMS WITH APPLICATIONS, 2021, 173
  • [3] Computing on Wheels: A Deep Reinforcement Learning-Based Approach
    Kazmi, S. M. Ahsan
    Tai Manh Ho
    Tuong Tri Nguyen
    Fahim, Muhammad
    Khan, Adil
    Piran, Md Jalil
    Baye, Gaspard
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (11) : 22535 - 22548
  • [4] OPTIMIZING HUMANITARIAN LOGISTICS WITH DEEP REINFORCEMENT LEARNING AND DIGITAL TWINS
    Soykan, Bulent
    Rabadia, Ghaith
    2024 ANNUAL MODELING AND SIMULATION CONFERENCE, ANNSIM 2024, 2024,
  • [5] Deep reinforcement learning-based approach for rumor influence minimization in social networks
    Jiang, Jiajian
    Chen, Xiaoliang
    Huang, Zexia
    Li, Xianyong
    Du, Yajun
    APPLIED INTELLIGENCE, 2023, 53 (17) : 20293 - 20310
  • [6] A deep reinforcement learning-based approach for the residential appliances scheduling
    Li, Sichen
    Cao, Di
    Huang, Qi
    Zhang, Zhenyuan
    Chen, Zhe
    Blaabjerg, Frede
    Hu, Weihao
    ENERGY REPORTS, 2022, 8 : 1034 - 1042
  • [7] Deep reinforcement learning-based approach for rumor influence minimization in social networks
    Jiajian Jiang
    Xiaoliang Chen
    Zexia Huang
    Xianyong Li
    Yajun Du
    Applied Intelligence, 2023, 53 : 20293 - 20310
  • [8] Real-time operation of distribution network: A deep reinforcement learning-based reconfiguration approach
    Bui, Van-Hai
    Su, Wencong
    SUSTAINABLE ENERGY TECHNOLOGIES AND ASSESSMENTS, 2022, 50
  • [9] Analyzing transportation and distribution in emergency humanitarian logistics
    Safeer, M.
    Anbuudayasankar, S. P.
    Balkumar, Kartik
    Ganesh, K.
    12TH GLOBAL CONGRESS ON MANUFACTURING AND MANAGEMENT (GCMM - 2014), 2014, 97 : 2248 - 2258
  • [10] Energy Trading in Smart Grid: A Deep Reinforcement Learning-based Approach
    Zhang, Feiye
    Yang, Qingyu
    PROCEEDINGS OF THE 32ND 2020 CHINESE CONTROL AND DECISION CONFERENCE (CCDC 2020), 2020, : 3677 - 3682