A Hierarchical Multi-Action Deep Reinforcement Learning Method for Dynamic Distributed Job-Shop Scheduling Problem With Job Arrivals

Cited by: 13
Authors
Huang, Jiang-Ping [1 ]
Gao, Liang [1 ]
Li, Xin-Yu [1 ]
Affiliations
[1] Huazhong Univ Sci & Technol, State Key Lab Intelligent Mfg Equipment & Technol, Wuhan 430074, Peoples R China
Funding
U.S. National Science Foundation;
Keywords
Deep reinforcement learning; dynamic scheduling; distributed scheduling; job-shop scheduling problem; GENETIC ALGORITHM; DISPATCHING RULES;
DOI
10.1109/TASE.2024.3380644
CLC number
TP [automation technology; computer technology];
Discipline code
0812;
Abstract
The Distributed Job-shop Scheduling Problem (DJSP) is a significant issue in both academic and industrial fields. In real-world production, uncertain disturbances such as job arrivals are inevitable. In this paper, the DJSP with job arrivals is addressed with a Multi-action Deep Reinforcement Learning (MDRL) method. First, a multi-action Markov Decision Process (MDP) is formulated, in which a hierarchical multi-action space combining an operation set and a factory set is proposed. The reward function is based on machine idle time. The state transition is also elaborately designed and covers four typical cases determined by job arrival times. Then, a scheduling policy with two decision networks is proposed, in which a Graph Neural Network (GNN) extracts the intrinsic information of the scheduling scheme. A Proximal Policy Optimization (PPO) algorithm with two actor-critic frameworks is designed to train the model, enabling intelligent decision-making with hierarchical action selection. Extensive experiments are conducted on 1350 instances. Comparisons with 17 composite rules, 3 closely related DRL methods, and 2 metaheuristics demonstrate the superior performance of the proposed MDRL. The application of the MDRL in an automotive engine manufacturing company has demonstrated its engineering value in the industrial field.

Note to Practitioners: The DJSP with job arrivals is a common challenge faced by equipment manufacturers, particularly in the electronic device manufacturing industry. These manufacturers are located in different areas and have varying facility configurations and operation trajectories. To address this challenge, a machine learning-based method can be applied for scheduling daily production tasks. This method divides the DJSP into two subproblems, namely job assignment and job sequencing, and uses two DRL-based decision networks to solve them. To address the uncertainty caused by job arrivals, the rescheduling process and the state update mechanism are carefully designed. A GNN performs feature extraction at each decision point and feeds the extracted features to the decision networks to make the optimal selection. The proposed method is capable of self-learning and self-adaptation, and its effectiveness has been proven through experiments on 1350 test instances. Its practical application has been demonstrated in the production scenarios of an automotive engine manufacturing company. In the future, the method can be adopted to solve more complex distributed manufacturing problems with constraints such as transportation costs and machine breakdowns.
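The hierarchical multi-action selection described in the abstract (first choose an operation, then choose a factory conditioned on that operation) can be illustrated with a minimal plain-Python sketch. All names and scores below are hypothetical stand-ins: in the actual method the scores come from two GNN-fed actor networks trained with PPO, which are not reproduced here.

```python
import math

def softmax(scores):
    """Normalize raw scores into a probability distribution."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def hierarchical_action(op_scores, factory_scores_by_op):
    """Two-level (greedy) action selection.

    op_scores: one raw score per eligible operation; stand-in for
        the output of the first decision network.
    factory_scores_by_op: dict mapping an operation index to per-factory
        scores; stand-in for the second decision network, conditioned
        on the chosen operation.
    Returns ((op_index, factory_index), joint_probability).
    """
    op_probs = softmax(op_scores)
    op = max(range(len(op_probs)), key=lambda i: op_probs[i])
    fac_probs = softmax(factory_scores_by_op[op])
    fac = max(range(len(fac_probs)), key=lambda j: fac_probs[j])
    return (op, fac), op_probs[op] * fac_probs[fac]

# Toy decision point: 3 eligible operations, 2 candidate factories each.
op_scores = [0.2, 1.5, -0.3]
factory_scores = {0: [0.1, 0.4], 1: [2.0, 0.5], 2: [0.0, 0.0]}
action, joint_prob = hierarchical_action(op_scores, factory_scores)
```

During PPO training the two levels would each have their own critic and the joint probability would enter the clipped surrogate objective; the greedy `max` above would be replaced by sampling from the two distributions.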
Pages: 2501-2513
Page count: 13