Intelligent Scheduling Method for Bulk Cargo Terminal Loading Process Based on Deep Reinforcement Learning

Cited by: 10
Authors
Li, Changan [1 ,2 ]
Wu, Sirui [3 ]
Li, Zhan [3 ,4 ]
Zhang, Yuxiao [3 ]
Zhang, Lijie [1 ]
Gomes, Luis [5 ]
Affiliations
[1] Yanshan Univ, Key Lab Adv Forging & Stamping Technol & Sci, Minist Educ China, Qinhuangdao 066004, Hebei, Peoples R China
[2] Chnenergy Tianjin Port Co Ltd, Tianjin 300450, Peoples R China
[3] Harbin Inst Technol, Res Inst Intelligent Control & Syst, Harbin 150001, Peoples R China
[4] Ningbo Inst Intelligent Equipment Technol Co Ltd, Ningbo 315201, Peoples R China
[5] NOVA Univ Lisbon, NOVA Sch Sci & Technol, Ctr Technol & Syst, P-2829516 Monte De Caparica, Portugal
Funding
National Natural Science Foundation of China;
Keywords
bulk cargo loading; MDP model; deep reinforcement learning; intelligent scheduling; BERTH ALLOCATION PROBLEM; OPTIMIZATION; SYSTEM;
DOI
10.3390/electronics11091390
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812 ;
Abstract
Sea freight is one of the most important modes of transportation and distribution for coal and other bulk cargo. This paper proposes a method, based on deep reinforcement learning, for optimizing the scheduling efficiency of the bulk cargo loading process. The process involves a large number of states and possible choices that must be taken into account, a task currently performed on site by skilled scheduling engineers. For modeling, we extracted important information from actual working data of the terminal to form the state space of the model; the yard information and the demand information of the ship are also considered. The scheduling output of each conveying path from the yard to the cabin is the action of the agent. To avoid conflicts in which two tasks occupy one machine at the same time, certain restrictions are placed on whether an action can be executed. Based on Double DQN, an improved deep reinforcement learning method is proposed with a fully connected network structure, which selects action sets according to the value output of the network and the occupancy status of the environment. To make the network converge more quickly, an improved epsilon-greedy exploration strategy is also proposed, which uses different exploration rates for completely random action selection and for random selection among feasible actions. After training, improved scheduling results are obtained even when tasks arrive randomly and the yard state is random. One important contribution of this paper is to integrate the useful features of the working process of the bulk cargo terminal into a state set, divide the scheduling process into discrete actions, and thereby reduce the scheduling problem to simple inputs and outputs.
Another major contribution of this article is the design of a reinforcement learning algorithm for the bulk cargo terminal scheduling problem with improved training efficiency, which provides a practical example of solving bulk cargo terminal scheduling problems with reinforcement learning.
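The masked, dual-rate exploration strategy described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the two exploration rates, and the feasibility mask (machines free vs. occupied) are assumptions introduced here for clarity.

```python
import numpy as np

def select_action(q_values, feasible_mask, eps_any=0.05, eps_feasible=0.2, rng=None):
    """Dual-rate epsilon-greedy action selection with feasibility masking (sketch).

    With probability eps_any, explore uniformly over *all* actions; with
    probability eps_feasible, explore uniformly over currently feasible actions
    only (e.g. conveying paths whose machines are not occupied); otherwise act
    greedily on the Q-values of feasible actions.
    """
    rng = rng or np.random.default_rng()
    feasible = np.flatnonzero(feasible_mask)   # indices of executable actions
    r = rng.random()
    if r < eps_any:
        return int(rng.integers(len(q_values)))          # fully random choice
    if r < eps_any + eps_feasible:
        return int(rng.choice(feasible))                 # random but feasible
    masked = np.where(feasible_mask, q_values, -np.inf)  # hide conflicting actions
    return int(np.argmax(masked))                        # greedy among feasible
```

Separating the two exploration rates lets training decay the fully random rate quickly (it mostly produces rejected, infeasible actions) while keeping feasible-random exploration higher for longer.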
Pages: 18
Related Papers
50 records
[21]   Deep Reinforcement Learning for Task Scheduling in Intelligent Building Edge Network [J].
Chen, Yuhao ;
Zhang, Zhe ;
Wang, Huixue ;
Wang, Yunzhe ;
Fu, Qiming ;
Lu, You .
2022 TENTH INTERNATIONAL CONFERENCE ON ADVANCED CLOUD AND BIG DATA, CBD, 2022, :312-317
[22]   A Method of Path Planning and Intelligent Exploration for Robot Based on Deep Reinforcement Learning [J].
Lyu, Xianglin ;
Zang, Zhaoxiang ;
Li, Sibo .
2024 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN 2024, 2024,
[23]   Reliable Scheduling Method for Sensitive Power Business Based on Deep Reinforcement Learning [J].
Guo, Shen ;
Lin, Jiaying ;
Bai, Shuaitao ;
Zhang, Jichuan ;
Wang, Peng .
INTELLIGENT AUTOMATION AND SOFT COMPUTING, 2023, 37 (01) :1053-1066
[24]   Deep Reinforcement Learning-Based Intelligent Traffic Scheduling in Software-Defined Networks [J].
Xie, Baoxing .
INFORMATICA (SLOVENIA), 2025, 49 (22) :145-165
[25]   Intelligent Scheduling of Distributed Photovoltaic EV Complementary Systems Based on Deep Reinforcement Learning Algorithm [J].
Chen, Ning ;
Li, Fashe ;
Wang, Shuang ;
Zhang, Huicong ;
Tang, Cunjin ;
Ni, Zihao .
GAODIANYA JISHU/HIGH VOLTAGE ENGINEERING, 2025, 51 (03) :1454-1463
[26]   Cargo Terminal Intelligent-Scheduling Strategies Based on Improved Bee Colony Algorithms [J].
Wang, Haiquan ;
Su, Menghao ;
Xu, Xiaobin ;
Haasis, Hans-Dietrich ;
Zhao, Ran ;
Wen, Shengjun ;
Wang, Yan .
APPLIED SCIENCES-BASEL, 2023, 13 (15)
[27]   An Intelligent Algorithm for Solving Unit Commitments Based on Deep Reinforcement Learning [J].
Huang, Guanglei ;
Mao, Tian ;
Zhang, Bin ;
Cheng, Renli ;
Ou, Mingyu .
SUSTAINABILITY, 2023, 15 (14)
[28]   Revising the Observation Satellite Scheduling Problem Based on Deep Reinforcement Learning [J].
Huang, Yixin ;
Mu, Zhongcheng ;
Wu, Shufan ;
Cui, Benjie ;
Duan, Yuxiao .
REMOTE SENSING, 2021, 13 (12)
[29]   Scheduling of twin automated stacking cranes based on Deep Reinforcement Learning [J].
Jin, Xin ;
Mi, Nan ;
Song, Wen ;
Li, Qiqiang .
COMPUTERS & INDUSTRIAL ENGINEERING, 2024, 191
[30]   Intelligent Decision-Making of Scheduling for Dynamic Permutation Flowshop via Deep Reinforcement Learning [J].
Yang, Shengluo ;
Xu, Zhigang ;
Wang, Junyi .
SENSORS, 2021, 21 (03) :1-21