Preference learning based deep reinforcement learning for flexible job shop scheduling problem

Cited by: 3
Authors
Liu, Xinning [1 ]
Han, Li [1 ]
Kang, Ling [2 ]
Liu, Jiannan [1 ]
Miao, Huadong [3 ]
Affiliations
[1] Dalian Neusoft Univ Informat, Sch Comp & Software, Dalian 116023, Liaoning, Peoples R China
[2] Dalian Neusoft Univ Informat, Neusoft Res Inst, Dalian 116023, Liaoning, Peoples R China
[3] SNOW China Beijing Co Ltd, Dalian Branch, Dalian 116023, Liaoning, Peoples R China
Keywords
Flexible job shop scheduling problem; Preference learning; Proximal policy optimization; Deep reinforcement learning; BENCHMARKS; ALGORITHM;
DOI
10.1007/s40747-024-01772-x
CLC number
TP18 [Theory of Artificial Intelligence];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
The flexible job shop scheduling problem (FJSP) holds significant importance in both theoretical research and practical applications. Given the complexity and diversity of FJSP, improving the generalization and quality of scheduling methods has become a topic of keen interest in both industry and academia. To this end, this paper proposes a Preference-Based Mask-PPO (PBMP) algorithm, which leverages the strengths of preference learning and invalid action masking to optimize FJSP solutions. First, a reward predictor based on preference learning is designed to model reward prediction by comparing randomly sampled trajectory fragments, eliminating the need for complex hand-crafted reward function design. Second, a novel intelligent switching mechanism is introduced, in which proximal policy optimization (PPO) is employed to enhance exploration during sampling, while masked proximal policy optimization (Mask-PPO) refines the action space during training, significantly improving efficiency and solution quality. Furthermore, the Pearson correlation coefficient (PCC) is used to evaluate the performance of the preference model. Finally, comparative experiments on FJSP benchmark instances of varying sizes demonstrate that PBMP outperforms traditional scheduling strategies such as dispatching rules, OR-Tools, and other deep reinforcement learning (DRL) algorithms, achieving superior scheduling policies and faster convergence. Even as instance sizes grow, preference learning proves to be an effective reward mechanism in reinforcement learning for FJSP. An ablation study further highlights the contribution of each key component of the PBMP algorithm across performance metrics.
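The two mechanisms the abstract highlights — a preference-learning reward predictor trained on pairwise fragment comparisons (a Bradley-Terry-style formulation is standard in this line of work), and invalid action masking inside the PPO policy — can be sketched as follows. This is an illustrative reconstruction under those assumptions, not the authors' implementation; all function names are hypothetical:

```python
import math

def masked_softmax(logits, valid_mask):
    """Invalid action masking: set logits of infeasible operation-machine
    pairs to -inf so their sampling probability becomes exactly zero."""
    masked = [l if v else float("-inf") for l, v in zip(logits, valid_mask)]
    m = max(masked)
    exps = [math.exp(x - m) for x in masked]   # exp(-inf) == 0.0
    total = sum(exps)
    return [e / total for e in exps]

def preference_loss(rhat_a, rhat_b, pref_a):
    """Bradley-Terry cross-entropy loss for a learned reward predictor.

    rhat_a, rhat_b: predicted per-step rewards over two trajectory fragments.
    pref_a: label in [0, 1] -- probability that fragment A was preferred.
    """
    ra, rb = sum(rhat_a), sum(rhat_b)
    # Probability that fragment A is preferred under the current predictor.
    p_a = math.exp(ra) / (math.exp(ra) + math.exp(rb))
    return -(pref_a * math.log(p_a) + (1.0 - pref_a) * math.log(1.0 - p_a))

# A masked (infeasible) action gets probability exactly 0.
probs = masked_softmax([1.2, 0.4, -0.3], [True, False, True])
# The loss is lower when the label agrees with the predicted rewards.
low = preference_loss([1.0, 1.0], [0.0, 0.0], 1.0)
high = preference_loss([1.0, 1.0], [0.0, 0.0], 0.0)
```

In the paper's setting, the preference labels would plausibly come from comparing fragment pairs by a scheduling-quality criterion such as makespan, with `pref_a = 0.5` for indistinguishable pairs; the PCC mentioned in the abstract would then measure how well the predictor's rewards correlate with the true criterion.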
Pages: 23