Real-time scheduling for dynamic workshops with random new job insertions by using deep reinforcement learning

Cited: 3
Authors
Sun, Z. Y. [1 ,2 ]
Han, W. M. [1 ]
Gao, L. L. [1 ]
Affiliations
[1] Jiangsu Univ Sci & Technol, Sch Econ & Management, Zhenjiang, Jiangsu, Peoples R China
[2] Pingdingshan Univ, Sch Software, Pingdingshan, Henan, Peoples R China
Source
Keywords
Real-time scheduling; Machine learning; Deep reinforcement learning (DRL); Spatial pyramid pooling layer; Artificial neural networks (ANN); Convolutional neural networks (CNN); SELECTION;
DOI
10.14743/apem2023.2.462
CLC number
T [Industrial technology];
Subject classification number
08;
Abstract
Dynamic real-time workshop scheduling under random job arrivals is critical for effective production. This study proposed a dynamic shop scheduling method integrating deep reinforcement learning (DRL) with a convolutional neural network (CNN). In this method, a spatial pyramid pooling layer was added to the CNN to achieve effective dynamic scheduling. A five-channel, two-dimensional matrix expressing the state characteristics of the production system was used to capture the real-time production state of the workshop. Adaptive scheduling was achieved by using a reward function corresponding to the minimization of total tardiness, and common production dispatching rules formed the action space. The experimental results revealed that the proposed algorithm achieved superior optimization capability at a lower time cost than the genetic algorithm and could adaptively select appropriate dispatching rules based on the state features of the production system.
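The abstract's key architectural element is the spatial pyramid pooling (SPP) layer, which maps a workshop state matrix of any size to a fixed-length feature vector, so the network can cope with a changing number of jobs after random insertions. Below is a minimal per-channel SPP sketch in plain Python; the pyramid levels `(1, 2, 4)` and the choice of max-pooling are illustrative assumptions, as this record does not state the paper's exact configuration:

```python
def spp_features(state, levels=(1, 2, 4)):
    """Pool one 2-D state channel into a fixed-length feature vector.

    The channel is divided into an n-by-n grid for each pyramid level n,
    and the maximum value of each grid cell is kept. The output length
    (sum of n*n over the levels) is independent of the input size.
    NOTE: the level choice (1, 2, 4) is a hypothetical example, not the
    paper's reported setting.
    """
    rows, cols = len(state), len(state[0])
    out = []
    for n in levels:
        for i in range(n):
            for j in range(n):
                # Integer-divided cell boundaries; cells may be empty
                # when the matrix is smaller than the grid.
                r0, r1 = i * rows // n, (i + 1) * rows // n
                c0, c1 = j * cols // n, (j + 1) * cols // n
                block = [state[r][c]
                         for r in range(r0, r1)
                         for c in range(c0, c1)]
                out.append(max(block) if block else 0.0)
    return out
```

With levels (1, 2, 4), each channel yields 1 + 4 + 16 = 21 features regardless of the matrix dimensions; applied to all five state channels described in the abstract, this would give a fixed-length input for the downstream fully connected layers, which is what allows the same network to score dispatching-rule actions as jobs arrive and the state matrix grows.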
Pages: 137-151
Page count: 15
Related papers
50 records total
  • [21] INTEGRATION OF DEEP REINFORCEMENT LEARNING AND DISCRETE-EVENT SIMULATION FOR REAL-TIME SCHEDULING OF A FLEXIBLE JOB SHOP PRODUCTION
    Lang, Sebastian
    Behrendt, Fabian
    Lanzerath, Nico
    Reggelin, Tobias
    Mueller, Marcel
    2020 WINTER SIMULATION CONFERENCE (WSC), 2020, : 3057 - 3068
  • [22] Real-time Scheduling using Reinforcement Learning Technique for the Connected Vehicles
    Park, Seongjin
    Yoo, Younghwan
    2018 IEEE 87TH VEHICULAR TECHNOLOGY CONFERENCE (VTC SPRING), 2018,
  • [23] Optimal real-time scheduling of battery operation using reinforcement learning
    Juarez, Carolina Quiroz
    Musilek, Petr
    2021 IEEE CANADIAN CONFERENCE ON ELECTRICAL AND COMPUTER ENGINEERING (CCECE), 2021,
  • [24] Real-time scheduling for a smart factory using a reinforcement learning approach
    Shiue, Yeou-Ren
    Lee, Ken-Chuan
    Su, Chao-Ton
    COMPUTERS & INDUSTRIAL ENGINEERING, 2018, 125 : 604 - 614
  • [25] Dynamic job-shop scheduling in smart manufacturing using deep reinforcement learning
    Wang, Libing
    Hu, Xin
    Wang, Yin
    Xu, Sujie
    Ma, Shijun
    Yang, Kexin
    Liu, Zhijun
    Wang, Weidong
    COMPUTER NETWORKS, 2021, 190 (190)
  • [26] Real-Time Scheduling for Flexible Job Shop With AGVs Using Multiagent Reinforcement Learning and Efficient Action Decoding
    Li, Yuxin
    Wang, Qingzheng
    Li, Xinyu
    Gao, Liang
    Fu, Ling
    Yu, Yanbin
    Zhou, Wei
    IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 2025, 55 (03): : 2120 - 2132
  • [27] TTDeep: Time-Triggered Scheduling for Real-Time Ethernet via Deep Reinforcement Learning
    Jia, Hongyu
    Jiang, Yu
    Zhong, Chunmeng
    Wan, Hai
    Zhao, Xibin
    2021 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2021,
  • [28] Dynamic flexible job shop scheduling based on deep reinforcement learning
    Yang, Dan
    Shu, Xiantao
    Yu, Zhen
    Lu, Guangtao
    Ji, Songlin
    Wang, Jiabing
    He, Kongde
    PROCEEDINGS OF THE INSTITUTION OF MECHANICAL ENGINEERS PART B-JOURNAL OF ENGINEERING MANUFACTURE, 2024,
  • [29] Deep reinforcement learning task scheduling method based on server real-time performance
    Wang, Jinming
    Li, Shaobo
    Zhang, Xingxing
    Wu, Fengbin
    Xie, Cankun
    PEERJ COMPUTER SCIENCE, 2024, 10
  • [30] Real-time optimal scheduling for microgrid systems based on distributed deep reinforcement learning
    Guo F.-H.
    He T.
    Wu X.
    Dong H.
    Liu B.
    Kongzhi Lilun Yu Yingyong/Control Theory and Applications, 2022, 39 (10): : 1881 - 1889