Preference learning based deep reinforcement learning for flexible job shop scheduling problem

Cited by: 3
Authors
Liu, Xinning [1 ]
Han, Li [1 ]
Kang, Ling [2 ]
Liu, Jiannan [1 ]
Miao, Huadong [3 ]
Affiliations
[1] Dalian Neusoft Univ Informat, Sch Comp & Software, Dalian 116023, Liaoning, Peoples R China
[2] Dalian Neusoft Univ Informat, Neusoft Res Inst, Dalian 116023, Liaoning, Peoples R China
[3] SNOW China Beijing Co Ltd, Dalian Branch, Dalian 116023, Liaoning, Peoples R China
Keywords
Flexible job shop scheduling problem; Preference learning; Proximal policy optimization; Deep reinforcement learning; BENCHMARKS; ALGORITHM;
DOI
10.1007/s40747-024-01772-x
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104; 0812; 0835; 1405
Abstract
The flexible job shop scheduling problem (FJSP) holds significant importance in both theoretical research and practical applications. Given the complexity and diversity of FJSP, improving the generalization ability and solution quality of scheduling methods has become a topic of keen interest in both industry and academia. To this end, this paper proposes a Preference-Based Mask-PPO (PBMP) algorithm, which leverages the strengths of preference learning and invalid action masking to optimize FJSP solutions. First, a reward predictor based on preference learning is designed; it learns the reward signal by comparing randomly sampled trajectory fragments, eliminating the need for complex hand-crafted reward function design. Second, a novel intelligent switching mechanism is introduced, in which proximal policy optimization (PPO) is employed to enhance exploration during sampling, while masked proximal policy optimization (Mask-PPO) refines the action space during training, significantly improving efficiency and solution quality. Furthermore, the Pearson correlation coefficient (PCC) is used to evaluate the performance of the preference model. Finally, comparative experiments on FJSP benchmark instances of varying sizes demonstrate that PBMP outperforms traditional scheduling strategies such as dispatching rules, as well as OR-Tools and other deep reinforcement learning (DRL) algorithms, achieving superior scheduling policies and faster convergence. Even as instance sizes increase, preference learning proves to be an effective reward mechanism in reinforcement learning for FJSP. An ablation study further highlights the contribution of each key component of the PBMP algorithm across the performance metrics.
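The two core mechanisms named in the abstract, preference-based reward prediction over pairs of fragments and invalid-action masking of the policy, follow well-known patterns. The sketch below is a minimal PyTorch illustration of both, assuming a flat observation vector per scheduling step and a discrete operation-selection action space; names such as RewardPredictor, obs_dim, and the valid-action mask are illustrative assumptions and do not reflect the paper's actual network architecture, state features, or the PBMP switching mechanism.

```python
# Minimal sketch (assumptions noted above), not the paper's implementation:
# (1) a reward predictor trained on pairwise fragment comparisons
#     (Bradley-Terry style cross-entropy), and
# (2) invalid-action masking applied to a categorical policy, as in Mask-PPO.
import torch
import torch.nn as nn


class RewardPredictor(nn.Module):
    """Predicts a scalar reward per step; trained so that summed predicted
    rewards agree with preferences over pairs of trajectory fragments."""

    def __init__(self, obs_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # obs: (T, obs_dim) -> per-step predicted rewards: (T,)
        return self.net(obs).squeeze(-1)


def preference_loss(predictor: RewardPredictor,
                    frag_a: torch.Tensor,
                    frag_b: torch.Tensor,
                    pref: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry cross-entropy; pref=1 means fragment A is preferred."""
    sum_a = predictor(frag_a).sum()
    sum_b = predictor(frag_b).sum()
    logits = torch.stack([sum_b, sum_a])  # class 1 <=> "A preferred"
    return nn.functional.cross_entropy(logits.unsqueeze(0),
                                       pref.long().unsqueeze(0))


def masked_action(logits: torch.Tensor, valid_mask: torch.Tensor):
    """Invalid-action masking: infeasible operations get -inf logits,
    so the categorical policy can only sample schedulable actions."""
    masked_logits = logits.masked_fill(~valid_mask, float("-inf"))
    dist = torch.distributions.Categorical(logits=masked_logits)
    action = dist.sample()
    return action, dist.log_prob(action)


if __name__ == "__main__":
    torch.manual_seed(0)
    predictor = RewardPredictor(obs_dim=8)
    frag_a, frag_b = torch.randn(10, 8), torch.randn(10, 8)
    loss = preference_loss(predictor, frag_a, frag_b, pref=torch.tensor(1.0))
    print("preference loss:", loss.item())

    logits = torch.randn(5)  # 5 candidate operations at this decision point
    mask = torch.tensor([True, False, True, True, False])
    action, logp = masked_action(logits, mask)
    print("sampled valid action:", action.item())
```

In a Mask-PPO style training loop, the same mask would also be applied when recomputing log-probabilities and entropy during the policy update, and the trained reward predictor would stand in for the environment reward when computing advantages.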
Pages: 23
Related papers
50 records in total
[21] A deep reinforcement learning assisted adaptive genetic algorithm for flexible job shop scheduling [J]. Ma, Jian; Gao, Weinan; Tong, Weitian. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2025, 149.
[22] Dynamic scheduling for flexible job shop with new job insertions by deep reinforcement learning [J]. Luo, Shu. APPLIED SOFT COMPUTING, 2020, 91.
[23] Optimizing the flexible job shop scheduling problem via deep reinforcement learning with mean multichannel graph attention [J]. Huang, Dailin; Zhao, Hong; Cao, Jie; Chen, Kangping; Zhang, Lijun. APPLIED SOFT COMPUTING, 2025, 177.
[24] A deep reinforcement learning method based on a multiexpert graph neural network for flexible job shop scheduling [J]. Huang, Dailin; Zhao, Hong; Tian, Weiquan; Chen, Kangping. COMPUTERS & INDUSTRIAL ENGINEERING, 2025, 200.
[25] Knowledge-Based Reinforcement Learning and Estimation of Distribution Algorithm for Flexible Job Shop Scheduling Problem [J]. Du, Yu; Li, Jun-qing; Chen, Xiao-long; Duan, Pei-yong; Pan, Quan-ke. IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE, 2023, 7 (04): 1036-1050.
[26] A heterogeneous graph attention-enhanced deep reinforcement learning framework for flexible job shop scheduling problem with variable sublots [J]. Yang, Zipeng; Li, Xinyu; Gao, Liang; Liu, Qihao. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2025, 157.
[27] Dynamic scheduling for flexible job shop using a deep reinforcement learning approach [J]. Gui, Yong; Tang, Dunbing; Zhu, Haihua; Zhang, Yi; Zhang, Zequn. COMPUTERS & INDUSTRIAL ENGINEERING, 2023, 180.
[28] Deep reinforcement learning for solving the joint scheduling problem of machines and AGVs in job shop [J]. Sun, A.-H.; Lei, Q.; Song, Y.-C.; Yang, Y.-F. Kongzhi yu Juece/Control and Decision, 2024, 39 (01): 253-262.
[29] Scheduling for the Flexible Job-Shop Problem with a Dynamic Number of Machines Using Deep Reinforcement Learning [J]. Chang, Yu-Hung; Liu, Chien-Hung; You, Shingchern D. INFORMATION, 2024, 15 (02).
[30] A discrete event simulator to implement deep reinforcement learning for the dynamic flexible job shop scheduling problem [J]. Tiacci, Lorenzo; Rossi, Andrea. SIMULATION MODELLING PRACTICE AND THEORY, 2024, 134.