Adaptive manufacturing control with Deep Reinforcement Learning for dynamic WIP management in industry 4.0

Cited by: 0
Authors
Vespoli, Silvestro [1 ]
Mattera, Giulio [1 ]
Marchesano, Maria Grazia [1 ]
Nele, Luigi [1 ]
Guizzi, Guido [1 ]
Affiliations
[1] Univ Naples Federico II, Dept Chem Mat & Ind Prod Engn, Piazzale Tecchio 80, I-80125 Naples, NA, Italy
Keywords
Smart production systems; Adaptive manufacturing control; Industry 4.0; Hybrid semi-heterarchical architecture; Deep Reinforcement Learning; CONWIP; AGENT; ARCHITECTURES; SYSTEMS
DOI
10.1016/j.cie.2025.110966
Chinese Library Classification
TP39 (Computer applications)
Discipline codes
081203; 0835
Abstract
In the context of Industry 4.0, manufacturing systems face increased complexity and uncertainty due to elevated product customisation and demand variability. This paper presents a novel framework for adaptive Work-In-Progress (WIP) control in semi-heterarchical architectures, addressing the limitations of traditional analytical methods that rely on exponential processing time distributions. Integrating Deep Reinforcement Learning (DRL) with Discrete Event Simulation (DES) enables model-free control of flow-shop production systems under non-exponential, stochastic processing times. A Deep Q-Network (DQN) agent dynamically manages WIP levels in a CONstant Work In Progress (CONWIP) environment, learning optimal control policies directly from system interactions. The framework's effectiveness is demonstrated through extensive experiments with varying machine numbers, processing times, and system variability. The results show robust performance in tracking the target throughput and adapting to processing time variability, achieving a Mean Absolute Percentage Error (MAPE) in throughput (the percentage difference between actual and target throughput) ranging from 0.3% to 2.3%, with standard deviations of 5.5% to 8.4%. Key contributions include the development of a data-driven WIP control approach that overcomes the limitations of analytical methods in stochastic environments, validation of the DQN agent's adaptability across varying production scenarios, and a demonstration of the framework's scalability in realistic manufacturing settings. This research bridges the gap between conventional WIP control methods and Industry 4.0 requirements, offering manufacturers an adaptive solution for enhanced production efficiency.
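The control loop the abstract describes (an RL agent raising or lowering the CONWIP card count to track a target throughput) can be sketched in miniature. The snippet below is an illustrative stand-in, not the authors' implementation: it replaces the DQN with tabular Q-learning (the state space here is small enough to enumerate), and replaces the DES with a toy throughput model that saturates with WIP and carries lognormal (non-exponential) noise. All names, parameters, and the reward shape are assumptions.

```python
import random

# Toy stand-in for the paper's setup: a CONWIP loop whose throughput
# saturates as WIP grows, with lognormal noise standing in for
# non-exponential stochastic processing times. Tabular Q-learning is
# used in place of the paper's DQN; all parameters are illustrative.

TARGET_TP = 0.8            # hypothetical target throughput (parts / time unit)
WIP_LEVELS = range(1, 11)  # admissible CONWIP card counts
ACTIONS = (-1, 0, +1)      # lower / keep / raise the WIP cap

def simulate_throughput(wip, rng):
    """One simulated run: mean throughput rises then saturates with WIP."""
    mean_tp = wip / (wip + 2.0)            # saturating curve (toy model)
    noise = rng.lognormvariate(0.0, 0.1)   # non-exponential variability
    return mean_tp * noise

def reward(tp):
    """Negative absolute percentage error versus the target throughput."""
    return -abs(tp - TARGET_TP) / TARGET_TP

def train(episodes=5000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {(w, a): 0.0 for w in WIP_LEVELS for a in ACTIONS}
    wip = 5
    for _ in range(episodes):
        # epsilon-greedy action selection over the WIP-cap adjustments
        if rng.random() < eps:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[(wip, x)])
        new_wip = min(max(wip + a, min(WIP_LEVELS)), max(WIP_LEVELS))
        r = reward(simulate_throughput(new_wip, rng))
        # standard Q-learning update
        best_next = max(q[(new_wip, x)] for x in ACTIONS)
        q[(wip, a)] += alpha * (r + gamma * best_next - q[(wip, a)])
        wip = new_wip
    return q, wip

if __name__ == "__main__":
    q, final_wip = train()
    print("settled WIP level:", final_wip)
```

In the toy model the throughput curve crosses the target near WIP = 8, so a trained policy should hover around that level; the paper's DQN plays the same role over a richer DES state (machine counts, processing-time variability) where a lookup table would not scale.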
Pages: 13
Related papers (50 total)
  • [21] Optimization Control of Adaptive Traffic Signal with Deep Reinforcement Learning
    Cao, Kerang
    Wang, Liwei
    Zhang, Shuo
    Duan, Lini
    Jiang, Guiminx
    Sfarra, Stefano
    Zhang, Hai
    Jung, Hoekyung
    ELECTRONICS, 2024, 13 (01)
  • [22] Digital twin-enabled quality control through deep learning in industry 4.0: a framework for enhancing manufacturing performance
    Aniba, Yehya
    Bouhedda, Mounir
    Bachene, Mourad
    Rahim, Messaoud
    Benyezza, Hamza
    Tobbal, Abdelhafid
    INTERNATIONAL JOURNAL OF MODELLING AND SIMULATION, 2024,
  • [23] ADWTune: an adaptive dynamic workload tuning system with deep reinforcement learning
    Li, Cuixia
    Wang, Junhai
    Shi, Jiahao
    Liu, Liqiang
    Zhang, Shuyan
    COMPLEX & INTELLIGENT SYSTEMS, 2025, 11 (04)
  • [24] Fluid dynamic control and optimization using deep reinforcement learning
    Innyoung Kim
    Donghyun You
    JMST Advances, 2024, 6 (1) : 61 - 65
  • [25] Enhancing Dynamic Production Scheduling and Resource Allocation Through Adaptive Control Systems with Deep Reinforcement Learning
    Aderoba, Olugbenga Adegbemisola
    Mpofu, Kluunbu Ani
    Adenuga, Olukorede Tijani
    Nzengue, Alliance Gracia Bibili
    PROCEEDINGS OF THE CONFERENCE ON PRODUCTION SYSTEMS AND LOGISTICS, CPSL 2024, 2024, : 814 - 827
  • [26] Comparison of Deep Reinforcement Learning and Model Predictive Control for Adaptive Cruise Control
    Lin, Yuan
    McPhee, John
    Azad, Nasser L.
    IEEE TRANSACTIONS ON INTELLIGENT VEHICLES, 2021, 6 (02): : 221 - 231
  • [27] Toward Adaptive Manufacturing: Scheduling Problems in the Context of Industry 4.0
    Nahhas, Abdulrahman
    Lang, Sebastian
    Bosse, Sascha
    Turowski, Klaus
    2018 SIXTH INTERNATIONAL CONFERENCE ON ENTERPRISE SYSTEMS (ES 2018), 2018, : 108 - 115
  • [28] Towards Network Dynamics: Adaptive Buffer Management with Deep Reinforcement Learning
    Zhu, Jing
    Wang, Dan
    Qin, Shuxin
    Tao, Gaofeng
    Gui, Hongxin
    Li, Fang
    Ou, Liang
    2022 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM 2022), 2022, : 4935 - 4940
  • [29] Adaptive DAG Tasks Scheduling with Deep Reinforcement Learning
    Wu, Qing
    Wu, Zhiwei
    Zhuang, Yuehui
    Cheng, Yuxia
    ALGORITHMS AND ARCHITECTURES FOR PARALLEL PROCESSING, ICA3PP 2018, PT II, 2018, 11335 : 477 - 490
  • [30] Deep Reinforcement Learning for Adaptive Learning Systems
    Li, Xiao
    Xu, Hanchen
    Zhang, Jinming
    Chang, Hua-hua
    JOURNAL OF EDUCATIONAL AND BEHAVIORAL STATISTICS, 2023, 48 (02) : 220 - 243