Adaptive manufacturing control with Deep Reinforcement Learning for dynamic WIP management in Industry 4.0

Times Cited: 0
Authors
Vespoli, Silvestro [1 ]
Mattera, Giulio [1 ]
Marchesano, Maria Grazia [1 ]
Nele, Luigi [1 ]
Guizzi, Guido [1 ]
Affiliations
[1] Univ Naples Federico II, Dept Chem Mat & Ind Prod Engn, Piazzale Tecchio 80, I-80125 Naples, NA, Italy
Keywords
Smart production systems; Adaptive manufacturing control; Industry 4.0; Hybrid semi-heterarchical architecture; Deep Reinforcement Learning; CONWIP; AGENT; ARCHITECTURES; SYSTEMS
DOI
10.1016/j.cie.2025.110966
CLC Classification Number
TP39 [Applications of computers]
Discipline Classification Codes
081203; 0835
Abstract
In the context of Industry 4.0, manufacturing systems face increased complexity and uncertainty due to elevated product customisation and demand variability. This paper presents a novel framework for adaptive Work-In-Progress (WIP) control in semi-heterarchical architectures, addressing the limitations of traditional analytical methods that rely on exponential processing time distributions. Integrating Deep Reinforcement Learning (DRL) with Discrete Event Simulation (DES) enables model-free control of flow-shop production systems under non-exponential, stochastic processing times. A Deep Q-Network (DQN) agent dynamically manages WIP levels in a CONstant Work In Progress (CONWIP) environment, learning optimal control policies directly from system interactions. The framework's effectiveness is demonstrated through extensive experiments with varying machine numbers, processing times, and system variability. The results show robust performance in tracking the target throughput and adapting to processing-time variability, achieving Mean Absolute Percentage Errors (MAPE) in throughput (the percentage difference between the actual and the target throughput) ranging from 0.3% to 2.3%, with standard deviations of 5.5% to 8.4%. Key contributions include the development of a data-driven WIP control approach that overcomes analytical methods' limitations in stochastic environments, validation of the DQN agent's adaptability across varying production scenarios, and demonstration of the framework's scalability in realistic manufacturing settings. This research bridges the gap between conventional WIP control methods and Industry 4.0 requirements, offering manufacturers an adaptive solution for enhanced production efficiency.
Pages: 13
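To make the control loop described in the abstract concrete, the sketch below pairs a toy CONWIP flow line with a small DQN that nudges the WIP cap up or down so that realized throughput tracks a target; the reward is the negative absolute percentage error, mirroring the MAPE criterion above. This is a minimal sketch under stated assumptions, not the authors' implementation: the paper's Discrete Event Simulation is replaced by the hypothetical `simulate_period` stand-in (a serial line with lognormal, i.e. non-exponential, processing times), and all names and parameter values (`M_MACHINES`, `TARGET_TH`, the three-action discretisation, network size, the absence of a target network) are illustrative choices.

```python
# Minimal sketch, not the authors' code: a DQN agent tuning the CONWIP WIP cap
# of a toy flow line so that realized throughput tracks a target throughput.
import random
import numpy as np
import torch
import torch.nn as nn

M_MACHINES, TARGET_TH, JOBS_PER_PERIOD = 5, 0.15, 200   # hypothetical settings

def simulate_period(wip_cap: int) -> float:
    """Crude CONWIP stand-in: jobs visit M_MACHINES in series; a new job is
    released only when the job holding the oldest of `wip_cap` cards has left."""
    machine_free = np.zeros(M_MACHINES)       # time each machine becomes idle
    completions, release_time = [], 0.0
    for _ in range(JOBS_PER_PERIOD):
        t = release_time                      # wait for a free CONWIP card
        for m in range(M_MACHINES):
            t = max(t, machine_free[m]) + np.random.lognormal(np.log(5.0), 0.5)
            machine_free[m] = t
        completions.append(t)
        if len(completions) >= wip_cap:       # card freed by job wip_cap positions back
            release_time = completions[-wip_cap]
    return JOBS_PER_PERIOD / completions[-1]  # throughput: jobs per time unit

class QNet(nn.Module):
    """Tiny Q-network; state = (normalised WIP cap, relative throughput error)."""
    def __init__(self, n_actions: int = 3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, n_actions))
    def forward(self, x):
        return self.net(x)

def encode(wip_cap: int, throughput: float) -> torch.Tensor:
    return torch.tensor([wip_cap / 20.0, (throughput - TARGET_TH) / TARGET_TH],
                        dtype=torch.float32)

qnet, replay = QNet(), []
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
gamma, epsilon = 0.9, 0.2
wip_cap = 5
throughput = simulate_period(wip_cap)

for step in range(300):
    s = encode(wip_cap, throughput)
    a = random.randrange(3) if random.random() < epsilon else int(qnet(s).argmax())
    wip_cap = int(np.clip(wip_cap + (a - 1), 1, 20))      # actions: remove / keep / add a card
    throughput = simulate_period(wip_cap)
    reward = -abs(throughput - TARGET_TH) / TARGET_TH     # negative absolute percentage error
    replay.append((s, a, reward, encode(wip_cap, throughput)))

    batch = random.sample(replay, min(32, len(replay)))   # simplified DQN: no target network
    ss = torch.stack([b[0] for b in batch])
    aa = torch.tensor([b[1] for b in batch])
    rr = torch.tensor([b[2] for b in batch], dtype=torch.float32)
    ns = torch.stack([b[3] for b in batch])
    with torch.no_grad():
        td_target = rr + gamma * qnet(ns).max(1).values
    q = qnet(ss).gather(1, aa.unsqueeze(1)).squeeze(1)
    loss = nn.functional.mse_loss(q, td_target)
    opt.zero_grad(); loss.backward(); opt.step()

print(f"learned WIP cap: {wip_cap}, throughput {throughput:.3f} vs target {TARGET_TH}")
```

In this toy setting the agent typically drifts the WIP cap upward until the simulated line's throughput sits near the target, which is the qualitative behaviour the paper's framework aims for; the real framework couples the agent to a full DES of the flow shop rather than this simplified serial-line model.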