Adaptive manufacturing control with Deep Reinforcement Learning for dynamic WIP management in industry 4.0

Citations: 0
Authors
Vespoli, Silvestro [1 ]
Mattera, Giulio [1 ]
Marchesano, Maria Grazia [1 ]
Nele, Luigi [1 ]
Guizzi, Guido [1 ]
Affiliations
[1] Univ Naples Federico II, Dept Chem Mat & Ind Prod Engn, Piazzale Tecchio 80, I-80125 Naples, NA, Italy
Keywords
Smart production systems; Adaptive manufacturing control; Industry 4.0; Hybrid semi-heterarchical architecture; Deep Reinforcement Learning; CONWIP; AGENT; ARCHITECTURES; SYSTEMS
DOI
10.1016/j.cie.2025.110966
CLC Number
TP39 [Computer Applications]
Discipline Codes
081203; 0835
Abstract
In the context of Industry 4.0, manufacturing systems face increased complexity and uncertainty due to elevated product customisation and demand variability. This paper presents a novel framework for adaptive Work-In-Progress (WIP) control in semi-heterarchical architectures, addressing the limitations of traditional analytical methods that rely on exponential processing time distributions. Integrating Deep Reinforcement Learning (DRL) with Discrete Event Simulation (DES) enables model-free control of flow-shop production systems under non-exponential, stochastic processing times. A Deep Q-Network (DQN) agent dynamically manages WIP levels in a CONstant Work In Progress (CONWIP) environment, learning optimal control policies directly from system interactions. The framework's effectiveness is demonstrated through extensive experiments with varying machine numbers, processing times, and system variability. The results show robust performance in tracking the target throughput and adapting to processing-time variability, achieving a Mean Absolute Percentage Error (MAPE) in throughput, calculated as the percentage difference between the actual and the target throughput, ranging from 0.3% to 2.3%, with standard deviations of 5.5% to 8.4%. Key contributions include the development of a data-driven WIP control approach that overcomes the limitations of analytical methods in stochastic environments, validation of the DQN agent's adaptability across varying production scenarios, and demonstration of the framework's scalability in realistic manufacturing settings. This research bridges the gap between conventional WIP control methods and Industry 4.0 requirements, offering manufacturers an adaptive solution for enhanced production efficiency.
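The record does not include the authors' code. As an illustration only, the control loop the abstract describes can be sketched in miniature: tabular Q-learning stands in for the paper's DQN, and a flow-shop completion-time recursion with lognormal (non-exponential) processing times stands in for the full DES. All names (`simulate_throughput`, `train_wip_controller`) and parameter values are assumptions for this sketch, not the authors' implementation.

```python
import math
import random

def simulate_throughput(wip, n_machines, mean_pt, cv, n_jobs=400, seed=0):
    """Quick stand-in for a DES run: completion-time recursion for a
    CONWIP flow shop with lognormal (non-exponential) processing times.
    A job may enter machine 0 only after the job `wip` positions ahead
    of it has left the last machine (the CONWIP release rule)."""
    rng = random.Random(seed)
    # Lognormal parameters giving mean = mean_pt, coefficient of variation = cv.
    sigma = math.sqrt(math.log(1.0 + cv * cv))
    mu = math.log(mean_pt) - 0.5 * sigma * sigma
    C = [[0.0] * n_machines for _ in range(n_jobs)]  # completion times
    for j in range(n_jobs):
        for m in range(n_machines):
            p = rng.lognormvariate(mu, sigma)
            prev_stage = C[j][m - 1] if m > 0 else 0.0
            prev_job = C[j - 1][m] if j > 0 else 0.0
            release = C[j - wip][-1] if (m == 0 and j >= wip) else 0.0
            C[j][m] = max(prev_stage, prev_job, release) + p
    return n_jobs / C[-1][-1]  # jobs per unit time

def train_wip_controller(target_th, n_machines=5, mean_pt=1.0, cv=0.8,
                         max_wip=15, episodes=300, seed=1):
    """Tabular Q-learning over (WIP level, adjustment) pairs; the reward is
    the negative absolute deviation of simulated throughput from target."""
    rng = random.Random(seed)
    actions = (-1, 0, 1)  # lower, hold, or raise the WIP cap
    Q = {(w, a): 0.0 for w in range(1, max_wip + 1) for a in actions}
    alpha, gamma, eps = 0.2, 0.9, 0.3
    w = max_wip // 2
    for _ in range(episodes):
        if rng.random() < eps:                     # epsilon-greedy exploration
            a = rng.choice(actions)
        else:
            a = max(actions, key=lambda x: Q[(w, x)])
        w_next = min(max(w + a, 1), max_wip)
        th = simulate_throughput(w_next, n_machines, mean_pt, cv,
                                 seed=rng.randrange(1 << 30))
        reward = -abs(th - target_th)
        best_next = max(Q[(w_next, x)] for x in actions)
        Q[(w, a)] += alpha * (reward + gamma * best_next - Q[(w, a)])
        w = w_next
    for _ in range(max_wip):  # greedy roll-in to the recommended WIP level
        w = min(max(w + max(actions, key=lambda x: Q[(w, x)]), 1), max_wip)
    return w
```

For example, `train_wip_controller(target_th=0.7)` returns a WIP cap whose simulated throughput sits near 0.7 jobs per time unit; the paper's DQN replaces the Q-table with a neural network so the policy generalises across machine counts and variability levels.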
Pages: 13
Related Papers (50 total)
  • [41] A loosely-coupled deep reinforcement learning approach for order acceptance decision of mass-individualized printed circuit board manufacturing in industry 4.0
    Leng, Jiewu
    Ruan, Guolei
    Song, Yuan
    Liu, Qiang
    Fu, Yingbin
    Ding, Kai
    Chen, Xin
    JOURNAL OF CLEANER PRODUCTION, 2021, 280
  • [42] A Dynamic Adaptive Jamming Power Allocation Method Based on Deep Reinforcement Learning
    Peng X.
    Xu H.
    Jiang L.
    Zhang Y.
    Rao N.
    Tien Tzu Hsueh Pao/Acta Electronica Sinica, 2023, 51 (05): : 1223 - 1234
  • [43] Deep Learning vs. Discrete Reinforcement Learning for Adaptive Traffic Signal Control
    Shabestary, Soheil Mohamad Alizadeh
    Abdulhai, Baher
    2018 21ST INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS (ITSC), 2018, : 286 - 293
  • [44] Quadrotor navigation in dynamic environments with deep reinforcement learning
    Fang, Jinbao
    Sun, Qiyu
    Chen, Yukun
    Tang, Yang
    ASSEMBLY AUTOMATION, 2021, 41 (03) : 254 - 262
  • [45] USV Path-Following Control Based On Deep Reinforcement Learning and Adaptive Control
    Gonzalez-Garcia, Alejandro
    Castaneda, Herman
    Garrido, Leonardo
    GLOBAL OCEANS 2020: SINGAPORE - U.S. GULF COAST, 2020,
  • [46] Dynamic job-shop scheduling in smart manufacturing using deep reinforcement learning
    Wang, Libing
    Hu, Xin
    Wang, Yin
    Xu, Sujie
    Ma, Shijun
    Yang, Kexin
    Liu, Zhijun
    Wang, Weidong
    COMPUTER NETWORKS, 2021, 190 (190)
  • [47] Evaluating Centralized and Heterarchical Control of Smart Manufacturing Systems in the Era of Industry 4.0
    Boccella, Anna Rosaria
    Centobelli, Piera
    Cerchione, Roberto
    Murino, Teresa
    Riedel, Ralph
    APPLIED SCIENCES-BASEL, 2020, 10 (03):
  • [48] Towards synchronization-oriented manufacturing planning and control for Industry 4.0 and beyond
    Guo, Daqiang
    Ling, Shiquan
    Rong, Yiming
    Huang, George Q.
    IFAC PAPERSONLINE, 2022, 55 (02): : 163 - 168
  • [49] Deep Adaptive Control: Deep Reinforcement Learning-Based Adaptive Vehicle Trajectory Control Algorithms for Different Risk Levels
    He, Yixu
    Liu, Yang
    Yang, Lan
    Qu, Xiaobo
    IEEE TRANSACTIONS ON INTELLIGENT VEHICLES, 2024, 9 (01): : 1654 - 1666
  • [50] Deep reinforcement learning towards real-world dynamic thermal management of data centers
    Zhang, Qingang
    Zeng, Wei
    Lin, Qinjie
    Chng, Chin-Boon
    Chui, Chee-Kong
    Lee, Poh-Seng
    APPLIED ENERGY, 2023, 333