Transferable multi-objective factory layout planning using simulation-based deep reinforcement learning

Cited: 0
Authors
Klar, Matthias [1 ,3 ]
Schworm, Philipp [1 ]
Wu, Xiangqian [1 ]
Simon, Peter [1 ]
Glatt, Moritz [1 ]
Ravani, Bahram [2 ]
Aurich, Jan C. [1 ]
Affiliations
[1] RPTU Kaiserslautern, Inst Mfg Technol & Prod Syst, Kaiserslautern, Germany
[2] Univ Calif Davis, Dept Mech & Aerosp Engn, Davis, CA USA
[3] POB 3049, D-67653 Kaiserslautern, Germany
Keywords
Facility layout problem; Reinforcement learning; Multi-objective optimization; Discrete event simulation; Material flow; GENETIC ALGORITHM; DESIGN; OPTIMIZATION; SEARCH;
DOI
10.1016/j.jmsy.2024.04.007
CLC number
T [Industrial Technology];
Discipline code
08 ;
Abstract
Factory layout planning aims at finding an optimized layout configuration under consideration of varying influences such as the material flow characteristics. Manual layout planning can be characterized as a complex decision-making process due to the large number of possible placement options. Automated planning approaches aim to reduce the manual planning effort by generating optimized layout variants in the early stages of layout planning. Recent developments have introduced deep Reinforcement Learning (RL) based planning approaches that optimize a layout with respect to a single optimization criterion. However, layout planning has to account for multiple, partially conflicting planning objectives, which existing RL-based approaches do not consider. This paper addresses this research gap by presenting a novel deep RL-based layout planning approach that allows multiple objectives to be considered for optimization. Furthermore, existing RL-based planning approaches only consider analytically formulated objectives such as the transportation distance. Consequently, dynamic influences in the material flow are neglected, which can result in higher operational costs of the future factory. To address this issue, a discrete event simulation module is developed that simulates manufacturing and material flow processes simultaneously for any layout configuration generated by the RL approach. The presented approach thus incorporates material flow simulation results into the multi-objective optimization. To investigate the capabilities of RL-based factory layout planning, different RL architectures are compared on a simplified application scenario. Throughput time, media supply, and material flow clarity are considered as optimization objectives.
The best performing architecture is then applied to an exemplary application scenario and compared with the results obtained by a combined version of the genetic algorithm and tabu search, by the non-dominated sorting genetic algorithm, and with the optimal solution. Moreover, two industrial planning scenarios are considered, one focusing on brownfield and one on greenfield planning. The results show that the performance of RL relative to meta-heuristics depends on the available computation time: given sufficient time, the results generated by the RL approach exceed the quality of the best conventional solution by up to 11%. Finally, the potential of applying transfer learning is investigated for three different application scenarios. It is observed that RL can learn generalized patterns for factory layout planning, which significantly reduces the required training time and can improve solution quality. Thus, pre-trained RL models show a substantial performance potential for automated factory layout planning that cannot be achieved with conventional automated planning approaches.
Pages: 487-511
Number of pages: 25
Related papers
50 records
  • [41] Track Learning Agent Using Multi-objective Reinforcement Learning
    Shah, Rushabh
    Ruparel, Vidhi
    Prabhu, Mukul
    D'mello, Lynette
    FOURTH CONGRESS ON INTELLIGENT SYSTEMS, VOL 1, CIS 2023, 2024, 868 : 27 - 40
  • [42] Explainable generative design in manufacturing for reinforcement learning based factory layout planning
    Klar, Matthias
    Ruediger, Patrick
    Schuermann, Maik
    Goeren, Tobias
    Glatt, Moritz
    Ravani, Bahram
    Aurich, Jan C.
    JOURNAL OF MANUFACTURING SYSTEMS, 2024, 72 : 74 - 92
  • [43] Optimization of the Factory Layout and Production Flow Using Production-Simulation-Based Reinforcement Learning
    Choi, Hyekyung
    Yu, Seokhwan
    Lee, Donghyun
    Noh, Sang Do
    Ji, Sanghoon
    Kim, Horim
    Yoon, Hyunsik
    Kwon, Minsu
    Han, Jagyu
    MACHINES, 2024, 12 (06)
  • [44] Multi-Objective Reinforcement Learning Based Healthcare Expansion Planning Considering Pandemic Events
    Shuvo, Salman Sadiq
    Symum, Hasan
    Ahmed, Md Rubel
    Yilmaz, Yasin
    Zayas-Castro, Jose L.
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2023, 27 (06) : 2760 - 2770
  • [45] Multi-Objective Service Composition Using Reinforcement Learning
    Moustafa, Ahmed
    Zhang, Minjie
    SERVICE-ORIENTED COMPUTING, ICSOC 2013, 2013, 8274 : 298 - 312
  • [46] Simulation-based multi-objective muffler optimization using efficient global optimization
    Puthuparampil, Jobin
    Sullivan, Pierre
    NOISE CONTROL ENGINEERING JOURNAL, 2020, 68 (06) : 441 - 458
  • [47] Model-Based Multi-Objective Reinforcement Learning
    Wiering, Marco A.
    Withagen, Maikel
    Drugan, Madalina M.
    2014 IEEE SYMPOSIUM ON ADAPTIVE DYNAMIC PROGRAMMING AND REINFORCEMENT LEARNING (ADPRL), 2014, : 111 - 116
  • [48] Hypervolume-Based Multi-Objective Reinforcement Learning
    Van Moffaert, Kristof
    Drugan, Madalina M.
    Nowe, Ann
    EVOLUTIONARY MULTI-CRITERION OPTIMIZATION, EMO 2013, 2013, 7811 : 352 - 366
  • [49] A storage expansion planning framework using reinforcement learning and simulation-based optimization
    Tsianikas, Stamatis
    Yousefi, Nooshin
    Zhou, Jian
    Rodgers, Mark D.
    Coit, David
    APPLIED ENERGY, 2021, 290
  • [50] Multi-objective recognition based on deep learning
    Liu, Xin
    Wu, Junhui
    Man, Yiyun
    Xu, Xibao
    Guo, Jifeng
    AIRCRAFT ENGINEERING AND AEROSPACE TECHNOLOGY, 2020, 92 (08): : 1185 - 1193