Performance comparison of reinforcement learning and metaheuristics for factory layout planning

Cited by: 14
Authors
Klar, Matthias [1 ]
Glatt, Moritz [1 ]
Aurich, Jan C. [1 ]
Affiliation
[1] RPTU Kaiserslautern, Inst Mfg Technol & Prod Syst FBK, POB 3049, D-67653 Kaiserslautern, Germany
Keywords
Reinforcement learning; Factory layout planning; Facility layout problem; Machine learning; Optimization; FACILITY; OPTIMIZATION; GO;
DOI
10.1016/j.cirpj.2023.05.008
Chinese Library Classification
T [Industrial Technology]
Discipline Classification Code
08
Abstract
Factory layout planning is a time-consuming process that has a large impact on the operational performance of a future factory. Moreover, changing technologies and market requirements lead to frequent reconfiguration of the factory layout. Automated planning approaches can generate high-quality layout solutions and reduce the planning time compared to purely manual planning. Recent studies indicate that reinforcement learning is a suitable approach to support the early phase of the layout planning process. In this context, reinforcement learning shows potential performance-related advantages over current metaheuristic approaches, which are commonly applied to this problem, by learning the problem-related interdependencies. However, recent studies consider only a small number of reinforcement learning approaches and application scenarios. Consequently, the performance of the various existing reinforcement learning approaches across different problem sizes has not been investigated. Moreover, no comparison between reinforcement learning approaches and existing metaheuristics has been performed for factory layout planning, so the potential of reinforcement learning based factory layout planning cannot be evaluated appropriately. An encompassing comparison with metaheuristics therefore remains an open research question. Against this background, this paper investigates the performance of 13 different reinforcement learning approaches and 7 commonly used metaheuristics on three layout planning problems of different sizes. The approaches are applied to all three layout planning problems in order to compare their performance capabilities. The results indicate that the best-performing reinforcement learning approach finds similar or superior solutions compared to the best-performing metaheuristics. © 2023 The Authors. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
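For illustration only: the abstract does not give the paper's exact problem formulation, but factory layout planning is commonly formalized as a quadratic assignment problem, where a layout assigns machines to candidate locations and its quality is the flow-weighted sum of pairwise distances. The sketch below is a minimal, assumed example of that objective together with a simple random-search stand-in for a metaheuristic baseline; the function names, toy instance data, and parameters are illustrative and not taken from the paper.

```python
# Minimal sketch (not the paper's implementation): facility layout planning
# cast as a quadratic assignment problem. A layout is a permutation assigning
# each machine to a grid location; the cost is the flow-weighted sum of
# pairwise distances between the assigned locations.
import random

import numpy as np


def layout_cost(assignment, flow, dist):
    """Transport-oriented cost: sum over (i, j) of flow[i, j] * dist[loc(i), loc(j)]."""
    idx = np.asarray(assignment)
    return float(np.sum(flow * dist[np.ix_(idx, idx)]))


def random_search_baseline(flow, dist, iters=2000, seed=0):
    """Illustrative stand-in for a metaheuristic: keep the best random permutation seen."""
    rng = random.Random(seed)
    n = flow.shape[0]
    best, best_cost = None, float("inf")
    for _ in range(iters):
        perm = list(range(n))
        rng.shuffle(perm)
        cost = layout_cost(perm, flow, dist)
        if cost < best_cost:
            best, best_cost = perm, cost
    return best, best_cost


if __name__ == "__main__":
    # Toy instance: 5 machines on 5 candidate grid locations (assumed data).
    rng = np.random.default_rng(42)
    n = 5
    flow = rng.integers(0, 10, size=(n, n)).astype(float)
    np.fill_diagonal(flow, 0.0)                 # no flow from a machine to itself
    coords = rng.random((n, 2)) * 10.0          # location coordinates in metres
    dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    layout, cost = random_search_baseline(flow, dist)
    print("best layout:", layout, "cost:", round(cost, 2))
```

A reinforcement learning agent would optimize the same objective by constructing the assignment step by step and receiving the (negative) layout cost as reward, which is the kind of setup the paper benchmarks against metaheuristics.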
Pages: 10-25
Number of pages: 16