Self-adapting WIP parameter setting using deep reinforcement learning

Cited by: 4
Authors
De Andrade e Silva, Manuel Tome [1]
Azevedo, Americo [1,2]
Affiliations
[1] Univ Porto, Fac Engn, Porto, Portugal
[2] Inst Syst & Comp Engn, Technol & Sci, Porto, Portugal
Keywords
WIP reduction; CONWIP; Deep reinforcement learning; WORKLOAD CONTROL; SYSTEMS; NUMBER; KANBANS; MULTIPRODUCT; THROUGHPUT; ALGORITHM; TIMES
DOI
10.1016/j.cor.2022.105854
Chinese Library Classification
TP39 [Computer applications]
Subject classification codes
081203; 0835
Abstract
This study investigates the potential of dynamically adjusting WIP cap levels to maximize throughput (TH) and minimize work in process (WIP), according to the real-time system state arising from the process variability associated with low-volume, high-variety production systems. Using an approach based on state-of-the-art deep reinforcement learning (the proximal policy optimization algorithm), we attain WIP reductions of up to 50% and 30%, with practically no loss in throughput, against pure-push systems and the statistical throughput control (STC) method, respectively. An exploratory study based on simulation experiments was performed to support our research. The reinforcement learning agent's performance was shown to be robust to variability changes within the production systems.
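The WIP cap / throughput trade-off the abstract describes can be illustrated with a toy simulation. The sketch below is not the authors' simulation model or their PPO agent; it is a minimal discrete-time CONWIP line (two stations, geometric processing times, parameters chosen arbitrarily) that shows why a learned policy has room to cut WIP: raising the cap quickly saturates throughput while WIP keeps growing.

```python
import random

def simulate_conwip(cap, steps=20000, p_complete=0.5, seed=0):
    """Toy two-station serial line under a CONWIP cap.

    Each time step, a busy station finishes its current job with
    probability p_complete; a fresh job is released into station 1
    only while total WIP is below the cap (the CONWIP release rule).
    Returns (throughput per step, time-averaged WIP).
    """
    rng = random.Random(seed)
    q1 = q2 = 0        # jobs held at station 1 and station 2
    finished = 0       # jobs that left the line
    wip_sum = 0        # accumulator for time-averaged WIP
    for _ in range(steps):
        if q1 + q2 < cap:                      # CONWIP release rule
            q1 += 1
        if q2 and rng.random() < p_complete:   # station 2 completes a job
            q2 -= 1
            finished += 1
        if q1 and rng.random() < p_complete:   # station 1 feeds station 2
            q1 -= 1
            q2 += 1
        wip_sum += q1 + q2
    return finished / steps, wip_sum / steps

# Diminishing returns: a large cap adds much more WIP than throughput.
th_low, wip_low = simulate_conwip(cap=2)
th_high, wip_high = simulate_conwip(cap=10)
```

Comparing the two runs makes the paper's premise concrete: beyond a modest cap, extra WIP buys almost no extra throughput, so an agent that tracks the system state can hold the cap near the knee of that curve, and lower it further when variability is low.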
Pages: 14
Related papers (50 records)
  • [1] Adaptive manufacturing control with Deep Reinforcement Learning for dynamic WIP management in Industry 4.0
    Vespoli, Silvestro
    Mattera, Giulio
    Marchesano, Maria Grazia
    Nele, Luigi
    Guizzi, Guido
    COMPUTERS & INDUSTRIAL ENGINEERING, 2025, 202
  • [2] Downlink Scheduler for Delay Guaranteed Services Using Deep Reinforcement Learning
    Ji, Jiequ
    Ren, Xiangyu
    Cai, Lin
    Zhu, Kun
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (04) : 3376 - 3390
  • [3] A deep recurrent Q network towards self-adapting distributed microservice architecture
    Magableh, Basel
    Almiani, Muder
    SOFTWARE-PRACTICE & EXPERIENCE, 2020, 50 (02): : 116 - 135
  • [4] Parallel Self-assembly for Modular Robots Using Deep Reinforcement Learning
    Mao, Yanbo
    Yao, Meibao
    INTELLIGENT ROBOTICS AND APPLICATIONS, ICIRA 2024, PT IV, 2025, 15204 : 258 - 272
  • [5] Deep Reinforcement Learning for Parameter Tuning of Robot Visual Servoing
    Xu, Meng
    Wang, Jianping
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2023, 14 (02)
  • [6] Towards Mitigating Straggler with Deep Reinforcement Learning in Parameter Server
    Lu, Haodong
    Wang, Kun
    2020 IEEE/CIC INTERNATIONAL CONFERENCE ON COMMUNICATIONS IN CHINA (ICCC), 2020, : 829 - 834
  • [7] Deep reinforcement learning enabled self-learning control for energy efficient driving
    Qi, Xuewei
    Luo, Yadan
    Wu, Guoyuan
    Boriboonsomsin, Kanok
    Barth, Matthew
    TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES, 2019, 99 : 67 - 81
  • [8] Self-Paced Prioritized Curriculum Learning With Coverage Penalty in Deep Reinforcement Learning
    Ren, Zhipeng
    Dong, Daoyi
    Li, Huaxiong
    Chen, Chunlin
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2018, 29 (06) : 2216 - 2226
  • [9] Simultaneous task and energy planning using deep reinforcement learning
    Wang, Di
    Hu, Mengqi
    Weir, Jeffery D.
    INFORMATION SCIENCES, 2022, 607 : 931 - 946
  • [10] Lenovo Schedules Laptop Manufacturing Using Deep Reinforcement Learning
    Liang, Yi
    Sun, Zan
    Song, Tianheng
    Chou, Qiang
    Fan, Wei
    Fan, Jianping
    Rui, Yong
    Zhou, Qiping
    Bai, Jessie
    Yang, Chun
    Bai, Peng
    INFORMS JOURNAL ON APPLIED ANALYTICS, 2022, 52 (01): : 56 - 68