Self-adapting WIP parameter setting using deep reinforcement learning

Cited by: 4
Authors
De Andrade e Silva, Manuel Tome [1 ]
Azevedo, Americo [1 ,2 ]
Affiliations
[1] Univ Porto, Fac Engn, Porto, Portugal
[2] Inst Syst & Comp Engn, Technol & Sci, Porto, Portugal
Keywords
WIP reduction; CONWIP; Deep reinforcement learning; WORKLOAD CONTROL; SYSTEMS; NUMBER; KANBANS; MULTIPRODUCT; THROUGHPUT; ALGORITHM; TIMES
DOI
10.1016/j.cor.2022.105854
CLC number
TP39 [Computer applications]
Subject classification codes
081203; 0835
Abstract
This study investigates the potential of dynamically adjusting WIP cap levels to maximize throughput (TH) and minimize work in process (WIP), according to the real-time system state arising from the process variability associated with low-volume, high-variety production systems. Using an innovative approach based on state-of-the-art deep reinforcement learning (the proximal policy optimization algorithm), we attain WIP reductions of up to 50% and 30%, with practically no loss in throughput, against pure-push systems and the statistical throughput control (STC) method, respectively. An exploratory study based on simulation experiments was performed to support our research. The reinforcement learning agent's performance was shown to be robust to variability changes within the production systems.
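The core tradeoff the abstract describes — a CONWIP cap that limits released WIP, trading a small throughput loss for a large WIP reduction — can be illustrated with a toy simulation. The following is a minimal sketch, not the paper's model: it assumes a three-station serial line with geometric (per-period Bernoulli) processing times, and all parameters (`p_complete`, cap values, horizon) are illustrative choices, not taken from the study.

```python
import random

def simulate_conwip(wip_cap, n_periods=10_000, seed=0):
    """Simulate a simple 3-station serial line under a CONWIP release rule.

    New jobs are released to the line only while total WIP is below
    `wip_cap`; each busy station finishes its current job with a fixed
    probability per period (geometric processing times model variability).
    Returns (throughput per period, average WIP).
    """
    rng = random.Random(seed)
    p_complete = [0.8, 0.7, 0.8]   # per-period completion probabilities
    queues = [0, 0, 0]             # jobs waiting or in service per station
    finished = 0
    wip_sum = 0
    for _ in range(n_periods):
        # CONWIP release: admit raw jobs only while the cap allows it.
        while sum(queues) < wip_cap:
            queues[0] += 1
        # Process stations back-to-front so a job advances at most one
        # stage per period.
        for i in reversed(range(3)):
            if queues[i] > 0 and rng.random() < p_complete[i]:
                queues[i] -= 1
                if i + 1 < 3:
                    queues[i + 1] += 1
                else:
                    finished += 1
        wip_sum += sum(queues)
    return finished / n_periods, wip_sum / n_periods

for cap in (3, 6, 12):
    th, wip = simulate_conwip(cap)
    print(f"cap={cap:2d}  throughput={th:.3f}  avg WIP={wip:.2f}")
```

Running the sketch shows throughput saturating near the bottleneck rate as the cap grows while average WIP keeps climbing, which is the diminishing-returns curve a learned, state-dependent cap (as in the paper's PPO agent) exploits: lower the cap when the extra WIP buys no throughput.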
Pages: 14