EFJSP-IBDR: Energy-efficient flexible job shop scheduling under incentive-based demand response via graph reinforcement learning with dual attention mechanism

Times Cited: 1
Authors
Liu, Mingzhou [1 ]
Rui, Zhangjie [1 ]
Zhang, Xi [1 ]
Ge, Maogen [1 ]
Ling, Lin [1 ,2 ]
Wang, Xiaoqiao [1 ]
Liu, Conghu [3 ]
Affiliations
[1] Hefei Univ Technol, Sch Mech Engn, Hefei 230009, Peoples R China
[2] Hefei Univ Technol, Anhui Key Lab Digital Design & Mfg, Hefei 230009, Peoples R China
[3] Suzhou Univ, Sch Mech & Elect Engn, Suzhou 234000, Peoples R China
Funding
National Natural Science Foundation of China (NSFC) International Cooperation and Exchange Program; National Natural Science Foundation of China;
Keywords
Flexible job shop scheduling; Energy-efficient manufacturing; Incentive-based demand response; Graph reinforcement learning; Dual attention mechanism; OPTIMIZATION; MOEA/D;
DOI
10.1016/j.eswa.2025.127340
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
As energy demand grows and the penetration of intermittent renewable energy increases, optimizing energy consumption patterns in industrial production processes has become an increasing challenge for sustainable manufacturing. This study investigates an energy-efficient flexible job shop scheduling problem with incentive-based demand response (EFJSP-IBDR), aiming to enhance energy consumption efficiency and cost-effectiveness. Given the time-sensitive and complex state features of EFJSP-IBDR, we propose an end-to-end graph reinforcement learning (GRL) approach. Initially, we decompose EFJSP-IBDR into a set of preference-based subproblems to facilitate multi-objective optimization. On this basis, a Markov decision process (MDP) with heterogeneous graph states is employed to model these subproblems. Subsequently, we construct graph neural network schedulers that integrate dual attention mechanisms to extract the features necessary for decision-making in each subproblem. Additionally, a similarity-based parameter transfer strategy is designed to accelerate the training of all schedulers. Empirical results demonstrate that our approach can reduce peak demand by 47.22% and average load by 51.01% over the manufacturing cycle. The effectiveness of the proposed method suggests its potential to improve load-side energy consumption flexibility and grid resilience.
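To make the preference-based decomposition step concrete, the following minimal Python sketch (not the authors' implementation; the objective names, weight spacing, and reward shape are illustrative assumptions) shows how a bi-objective scheduling reward, e.g. makespan increment and energy-cost increment, can be scalarized into a set of preference-conditioned subproblems on which separate GRL schedulers would then be trained.

import numpy as np

def make_preferences(n_subproblems: int) -> np.ndarray:
    # Evenly spaced preference (weight) vectors over the two objectives;
    # each row defines one scalarized subproblem.
    w = np.linspace(0.0, 1.0, n_subproblems)
    return np.stack([w, 1.0 - w], axis=1)  # shape: (n_subproblems, 2)

def scalarized_reward(delta_makespan: float, delta_energy_cost: float,
                      preference: np.ndarray) -> float:
    # Weighted-sum scalarization of the two objective increments; the sign is
    # flipped so that lower makespan / energy cost yields a higher reward.
    objectives = np.array([delta_makespan, delta_energy_cost])
    return float(-(preference @ objectives))

# Example: one scheduling action that adds 2.0 time units and 0.5 cost units,
# evaluated under five preference vectors (i.e. five subproblems).
for pref in make_preferences(5):
    print(pref, scalarized_reward(2.0, 0.5, pref))

Under this reading, the similarity-based parameter transfer described in the abstract would initialize the scheduler of each new subproblem from the trained scheduler whose preference vector is closest to it, rather than training every subproblem from scratch.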
Pages: 18