Adaptive manufacturing control with Deep Reinforcement Learning for dynamic WIP management in industry 4.0

Cited by: 0
Authors
Vespoli, Silvestro [1 ]
Mattera, Giulio [1 ]
Marchesano, Maria Grazia [1 ]
Nele, Luigi [1 ]
Guizzi, Guido [1 ]
Affiliations
[1] Univ Naples Federico II, Dept Chem Mat & Ind Prod Engn, Piazzale Tecchio 80, I-80125 Naples, NA, Italy
Keywords
Smart production systems; Adaptive manufacturing control; Industry 4.0; Hybrid semi-heterarchical architecture; Deep Reinforcement Learning; CONWIP; AGENT; ARCHITECTURES; SYSTEMS
DOI
10.1016/j.cie.2025.110966
Chinese Library Classification (CLC)
TP39 [Computer applications]
Discipline codes
081203; 0835
Abstract
In the context of Industry 4.0, manufacturing systems face increased complexity and uncertainty due to elevated product customisation and demand variability. This paper presents a novel framework for adaptive Work-In-Progress (WIP) control in semi-heterarchical architectures, addressing the limitations of traditional analytical methods that rely on exponential processing time distributions. Integrating Deep Reinforcement Learning (DRL) with Discrete Event Simulation (DES) enables model-free control of flow-shop production systems under non-exponential, stochastic processing times. A Deep Q-Network (DQN) agent dynamically manages WIP levels in a CONstant Work In Progress (CONWIP) environment, learning optimal control policies directly from system interactions. The framework's effectiveness is demonstrated through extensive experiments with varying machine numbers, processing times, and system variability. The results show robust performance in tracking the target throughput and adapting to processing-time variability, achieving Mean Absolute Percentage Errors (MAPE) in the throughput - calculated as the percentage difference between the actual and the target throughput - ranging from 0.3% to 2.3% with standard deviations of 5.5% to 8.4%. Key contributions include the development of a data-driven WIP control approach that overcomes analytical methods' limitations in stochastic environments, validation of DQN agent adaptability across varying production scenarios, and a demonstration of framework scalability in realistic manufacturing settings. This research bridges the gap between conventional WIP control methods and Industry 4.0 requirements, offering manufacturers an adaptive solution for enhanced production efficiency.
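The control loop the abstract describes - an RL agent nudging the CONWIP WIP cap so that throughput tracks a target under non-exponential processing times - can be sketched with a toy model. Everything below is an illustrative assumption, not the authors' implementation: the saturating throughput curve stands in for their DES, lognormal noise stands in for non-exponential variability, and a tiny tabular Q-learner stands in for the DQN agent; all names and parameter values are hypothetical.

```python
import random

# Toy CONWIP throughput model (ASSUMPTION: not the paper's DES).
# Throughput of a balanced flow shop rises with WIP and saturates at the
# bottleneck rate; lognormal noise mimics non-exponential variability.
def throughput(wip, n_machines=5, rate=1.0, cv=0.3, rng=random.Random(42)):
    mean_tp = rate * wip / (wip + n_machines - 1)
    return mean_tp * rng.lognormvariate(0.0, cv)

# Actions: lower, hold, or raise the WIP cap by one job.
ACTIONS = (-1, 0, 1)

def train_controller(target=0.7, episodes=2000, alpha=0.1, eps=0.1,
                     rng=random.Random(7)):
    """Epsilon-greedy tabular Q-learning stand-in for the DQN agent.
    State = sign of the throughput error (too low / on target / too high)."""
    q = {s: [0.0, 0.0, 0.0] for s in (-1, 0, 1)}
    wip = 5
    rel_errors = []
    for _ in range(episodes):
        tp = throughput(wip)
        err = tp - target
        state = (err > 0.02) - (err < -0.02)          # -1, 0, or +1
        if rng.random() < eps:
            a = rng.randrange(3)                       # explore
        else:
            a = max(range(3), key=q[state].__getitem__)  # exploit
        wip = min(20, max(1, wip + ACTIONS[a]))        # bounded WIP cap
        reward = -abs(throughput(wip) - target)        # track the target
        q[state][a] += alpha * (reward - q[state][a])  # one-step update
        rel_errors.append(abs(err) / target)
    # MAPE as defined in the abstract: percentage gap between actual
    # and target throughput, here averaged over the last 200 steps.
    mape = 100 * sum(rel_errors[-200:]) / 200
    return wip, mape
```

A real instantiation would replace the throughput curve with a discrete-event simulation of the flow shop and the lookup table with a neural Q-function; the structure of the loop (observe throughput error, adjust the WIP cap, reward proximity to target) is what the sketch is meant to convey.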
Pages: 13
Related papers (50 in total)
  • [1] A Review of Deep Reinforcement Learning Approaches for Smart Manufacturing in Industry 4.0 and 5.0 Framework
    del Real Torres, Alejandro
    Stefan Andreiana, Doru
    Ojeda Roldan, Alvaro
    Hernandez Bustos, Alfonso
    Acevedo Galicia, Luis Enrique
    APPLIED SCIENCES-BASEL, 2022, 12 (23)
  • [2] Automatic assembly cost control method of Industry 4.0 production line based on deep reinforcement learning
    Zhou H.
    International Journal of Manufacturing Technology and Management, 2022, 36 (5-6) : 352 - 367
  • [3] Optimization Planning Scheduling Problem in Industry 4.0 Using Deep Reinforcement Learning
    Terol, Marcos
    Gomez-Gasquet, Pedro
    Boza, Andres
    IOT AND DATA SCIENCE IN ENGINEERING MANAGEMENT, 2023, 160 : 136 - 140
  • [4] Enabling adaptable Industry 4.0 automation with a modular deep reinforcement learning framework
    Raziei, Zohreh
    Moghaddam, Mohsen
    IFAC PAPERSONLINE, 2021, 54 (01): 546 - 551
  • [5] Self-adapting WIP parameter setting using deep reinforcement learning
    De Andrade e Silva, Manuel Tome
    Azevedo, Americo
    COMPUTERS & OPERATIONS RESEARCH, 2022, 144
  • [6] Deep Reinforcement Learning for Multiobjective Scheduling in Industry 5.0 Reconfigurable Manufacturing Systems
    Bezoui, Madani
    Kermali, Abdelfatah
    Bounceur, Ahcene
    Qaisar, Saeed Mian
    Almaktoom, Abdulaziz Turki
    MACHINE LEARNING FOR NETWORKING, MLN 2023, 2024, 14525 : 90 - 107
  • [7] Dynamic resource matching in manufacturing using deep reinforcement learning
    Panda, Saunak Kumar
    Xiang, Yisha
    Liu, Ruiqi
    EUROPEAN JOURNAL OF OPERATIONAL RESEARCH, 2024, 318 (02) : 408 - 423
  • [8] Adaptive Tuning of Dynamic Matrix Control for Uncertain Industrial Systems With Deep Reinforcement Learning
    Zhang, Yang
    Wang, Peng
    Yu, Liying
    Li, Ning
    IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 2024,
  • [9] Dynamic Adaptive Streaming Control based on Deep Reinforcement Learning in Named Data Networking
    Qiu, Shengyan
    Tan, Xiaobin
    Zhu, Jin
    2018 37TH CHINESE CONTROL CONFERENCE (CCC), 2018: 9478 - 9482
  • [10] Distributed Deep Reinforcement Learning Resource Allocation Scheme For Industry 4.0 Device-To-Device Scenarios
    Burgueno, Jesus
    Adeogun, Ramoni
    Bruun, Rasmus Liborius
    Garcia, C. Santiago Morejon
    de-la-Bandera, Isabel
    Barco, Raquel
    2021 IEEE 94TH VEHICULAR TECHNOLOGY CONFERENCE (VTC2021-FALL), 2021,