Deep Reinforcement Learning for Dynamic Berth Allocation with Random Ship Arrivals

Cited by: 0
Authors
Zhou, Qianyu [1 ]
Wang, Peng [1 ]
Cao, Xiaohua [1 ]
Affiliations
[1] Wuhan Univ Technol, Sch Transportat & Logist Engn, Wuhan, Peoples R China
Source
2024 6TH INTERNATIONAL CONFERENCE ON DATA-DRIVEN OPTIMIZATION OF COMPLEX SYSTEMS, DOCS 2024 | 2024
Keywords
Dynamic berth allocation; Double Dueling Deep Q-Network; Deep reinforcement learning; Intelligent scheduling;
DOI
10.1109/DOCS63458.2024.10704490
CLC Classification
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
With the growth of global trade volume and advancements in information technology, the intelligent transformation of port systems has become a trend in the transportation industry. To address the unpredictable factors of dynamic ship arrivals, this paper utilizes a deep reinforcement learning (DRL) approach to solve the dynamic berth allocation problem (DBAP). The scheduling model aims to minimize the weighted waiting time of ships. The state space is constructed by extracting information about berths and ships in a dynamic environment. This paper proposes ship operation task buffers to map the flexible action space, and the optimization objective is decomposed by decision intervals to design the reward function. A double dueling deep Q-network (D3QN) algorithm, which integrates the advantages of DDQN and Dueling DQN, is used to solve the scheduling scheme. Finally, the network is trained with data to enable the agent to choose the optimal action based on the current state of the harbor berth system. The experimental results show that this method can effectively reduce ship waiting times in a dynamic environment, proving to be more advantageous than methods based on traditional dispatching rules.
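The abstract names two components that D3QN combines: the dueling architecture, which splits the Q-value into a state value and per-action advantages, and the double-DQN target, which uses the online network to select the next action and the target network to evaluate it. A minimal NumPy sketch of just these two pieces is below; the function names, toy values, and the "assign ship / wait" framing are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dueling_q(value, advantages):
    """Dueling aggregation: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)."""
    return value + advantages - advantages.mean(axis=-1, keepdims=True)

def double_dqn_target(reward, done, gamma, q_online_next, q_target_next):
    """Double-DQN target: the online net picks the action, the target net scores it."""
    best = np.argmax(q_online_next, axis=-1)                     # action selection (online)
    q_eval = np.take_along_axis(q_target_next, best[..., None],  # action evaluation (target)
                                axis=-1).squeeze(-1)
    return reward + gamma * (1.0 - done) * q_eval

# Toy batch of one state with two actions (e.g. "assign ship to berth" vs "wait"):
q = dueling_q(np.array([[1.0]]), np.array([[2.0, 4.0]]))
print(q)  # [[0. 2.]]  -- mean advantage subtracted for identifiability
y = double_dqn_target(np.array([1.0]), np.array([0.0]), 0.9,
                      np.array([[3.0, 5.0]]), np.array([[2.0, 7.0]]))
print(y)  # [7.3]
```

Subtracting the mean advantage keeps the V/A decomposition identifiable, and evaluating the online net's chosen action with the frozen target net reduces the overestimation bias of vanilla DQN.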
Pages: 799 - 805
Page count: 7
Related Papers (50 in total)
  • [21] Deep Reinforcement Learning Based Resource Allocation for LoRaWAN
    Li, Aohan
    2022 IEEE 96TH VEHICULAR TECHNOLOGY CONFERENCE (VTC2022-FALL), 2022,
  • [22] Real-time scheduling for two-stage assembly flowshop with dynamic job arrivals by deep reinforcement learning
    Chen, Jian
    Zhang, Hanlei
    Ma, Wenjing
    Xu, Gangyan
    ADVANCED ENGINEERING INFORMATICS, 2024, 62
  • [23] Deep Reinforcement Learning for Scalable Dynamic Bandwidth Allocation in RAN Slicing With Highly Mobile Users
    Choi, Sihyun
    Choi, Siyoung
    Lee, Goodsol
    Yoon, Sung-Guk
    Bahk, Saewoong
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2024, 73 (01) : 576 - 590
  • [24] Deep Reinforcement Learning for Dynamic Power Allocation in Cell-free mmWave Massive MIMO
    Zhao, Yu
    Niemegeers, Ignas
    de Groot, Sonia Heemstra
    PROCEEDINGS OF THE 18TH INTERNATIONAL CONFERENCE ON WIRELESS NETWORKS AND MOBILE SYSTEMS (WINSYS), 2021, : 33 - 45
  • [25] Dynamic Power Allocation for Cell-Free Massive MIMO: Deep Reinforcement Learning Methods
    Zhao, Yu
    Niemegeers, Ignas G.
    De Groot, Sonia M. Heemstra
    IEEE ACCESS, 2021, 9 (09) : 102953 - 102965
  • [26] Research on Ship Trajectory Control Based on Deep Reinforcement Learning
    Xu, Lixin
    Chen, Jiarong
    Hong, Zhichao
    Xu, Shengqing
    Zhang, Sheng
    Shi, Lin
    JOURNAL OF MARINE SCIENCE AND ENGINEERING, 2025, 13 (04)
  • [27] DEEP REINFORCEMENT LEARNING FOR SHIP COLLISION AVOIDANCE AND PATH TRACKING
    Singht, Amar Nath
    Vijayakumar, Akash
    Balasubramaniyam, Shankruth
    Somayajula, Abhilash
    PROCEEDINGS OF ASME 2024 43RD INTERNATIONAL CONFERENCE ON OCEAN, OFFSHORE AND ARCTIC ENGINEERING, OMAE2024, VOL 5B, 2024,
  • [28] Multitask Augmented Random Search in deep reinforcement learning
    Thanh, Le Tien
    Thang, Ta Bao
    Van Cuong, Le
    Binh, Huynh Thi Thanh
    APPLIED SOFT COMPUTING, 2024, 160
  • [29] Dynamic Positioning using Deep Reinforcement Learning
    Overeng, Simen Sem
    Nguyen, Dong Trong
    Hamre, Geir
    OCEAN ENGINEERING, 2021, 235
  • [30] Resource allocation for network slicing in dynamic multi-tenant networks: A deep reinforcement learning approach
    Xie, Yanghao
    Kong, Yuyang
    Huang, Lin
    Wang, Sheng
    Xu, Shizhong
    Wang, Xiong
    Ren, Jing
    COMPUTER COMMUNICATIONS, 2022, 195 : 476 - 487