A Reinforcement Learning Based Large-Scale Refinery Production Scheduling Algorithm
Cited by: 6
Authors:
Chen, Yuandong [1,2]
Ding, Jinliang [1]
Chen, Qingda [1]
Affiliations:
[1] Northeastern Univ, State Key Lab Synthet Automat Proc Ind, Shenyang 110819, Peoples R China
[2] Fujian Univ Technol, Sch Transportat, Fuzhou 350118, Peoples R China
Funding:
National Natural Science Foundation of China;
China Postdoctoral Science Foundation;
Keywords:
Production;
Job shop scheduling;
Mathematical models;
Oils;
Reinforcement learning;
Optimization;
Petroleum;
Large-scale optimization;
reinforcement learning;
refinery;
scheduling;
LAGRANGIAN DECOMPOSITION APPROACH;
OPERATIONAL TRANSITIONS;
TIME;
OPTIMIZATION;
DOI:
10.1109/TASE.2023.3321612
CLC number:
TP [automation technology; computer technology];
Discipline code:
0812;
Abstract:
Refinery production scheduling is a mixed-integer programming problem that suffers from combinatorial explosion, so solving a large-scale instance is time-consuming. This article proposes an approximate solution framework based on reinforcement learning (RL) for large-scale, long-horizon refinery production scheduling problems that rapidly obtains a satisfactory solution. In the proposed algorithm, the Proximal Policy Optimization (PPO) algorithm is used to handle continuous actions. To address the cold-start issue of RL in refinery scheduling, we present an initialization method for the agent's actor: the operation knowledge of tractable small-scale problems is used to initialize the actor network, and the agent is then trained in the environment of large-scale problems. Hence, the convergence of the RL algorithm is greatly accelerated. In addition, the product-flowrate concept is used to express the state, making the scheduling agent scalable with respect to the scheduling horizon. Experimental studies show that, for large-scale refinery scheduling problems, the proposed algorithm obtains better solutions than the CPLEX solver and an existing evolutionary algorithm, in a much shorter solving time than either method.
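The abstract's actor-initialization idea can be sketched in code. This is a minimal illustration, not the authors' implementation: it assumes the "operation knowledge of tractable small-scale problems" takes the form of (state, action) pairs from solved small-scale schedules, and warm-starts a linear-Gaussian actor mean by behavior cloning (least squares) before any PPO fine-tuning. All data and dimensions here are hypothetical.

```python
import numpy as np

# Hedged sketch of actor warm-starting (an assumption about the method,
# not the paper's exact procedure): fit the actor to demonstrations from
# small-scale solutions so PPO starts from a sensible policy instead of
# a random (cold-start) one.

rng = np.random.default_rng(0)

# Hypothetical demonstrations: states are product-flowrate feature
# vectors, actions are continuous control settings taken from solved
# small-scale scheduling instances.
states = rng.normal(size=(200, 4))            # 200 demos, 4 flowrate features
demo_W = rng.normal(size=(4, 2))              # hidden demo policy (data gen only)
actions = states @ demo_W + 0.01 * rng.normal(size=(200, 2))

# Behavior cloning: least-squares fit of the actor mean,
# W_init = argmin_W ||states @ W - actions||^2.
W_init, *_ = np.linalg.lstsq(states, actions, rcond=None)

# The cloned actor reproduces the demonstrated actions closely; this
# W_init would seed the PPO actor network's output layer before
# training continues in the large-scale environment.
bc_error = np.mean((states @ W_init - actions) ** 2)
print(f"behavior-cloning MSE: {bc_error:.6f}")
```

The same idea carries over to a nonlinear actor: pretrain the network with a supervised loss on the small-scale demonstrations, then switch to the PPO objective on the large-scale environment.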
Pages: 6041-6055
Page count: 15