Performance comparison of reinforcement learning and metaheuristics for factory layout planning

Cited by: 14
Authors
Klar, Matthias [1 ]
Glatt, Moritz [1 ]
Aurich, Jan C. [1 ]
Affiliations
[1] RPTU Kaiserslautern, Inst Mfg Technol & Prod Syst FBK, POB 3049, D-67653 Kaiserslautern, Germany
Keywords
Reinforcement learning; Factory layout planning; Facility layout problem; Machine learning; Optimization; FACILITY; OPTIMIZATION; GO;
DOI
10.1016/j.cirpj.2023.05.008
Chinese Library Classification (CLC) number
T [Industrial technology];
Subject classification code
08;
Abstract
Factory layout planning is a time-consuming process that has a large impact on the operational performance of a future factory. In addition, changing technologies and market requirements result in frequent reconfigurations of the factory layout. Automated planning approaches can generate high-quality layout solutions and reduce the planning time compared to purely manual planning. Recent studies indicate that reinforcement learning is a suitable approach to support the early phase of the layout planning process. In this context, reinforcement learning shows potential performance-related advantages over current metaheuristic approaches, which are commonly applied to the regarded problem, by learning the problem-related interdependencies. However, recent studies only consider a small number of reinforcement learning approaches and application scenarios. Consequently, the performance of various existing reinforcement learning approaches across different problem sizes has not been investigated. Moreover, no comparison between reinforcement learning approaches and existing metaheuristics has been performed for factory layout planning, so the potential of reinforcement learning based factory layout planning cannot be evaluated appropriately. An encompassing comparison to metaheuristics therefore remains an open research question. Against this background, this paper investigates the performance of 13 different reinforcement learning approaches and 7 commonly used metaheuristics on three layout planning problems of different sizes. The approaches are applied to all three layout planning problems in order to compare their performance capabilities. The results indicate that the best-performing reinforcement learning approach is able to find similar or superior solutions compared to the best-performing metaheuristics. © 2023 The Authors. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
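For orientation, the facility layout problem named in the abstract is often posed as assigning facilities to candidate locations so that the total flow-weighted distance is minimized. The following minimal Python sketch illustrates that generic quadratic-assignment-style objective only; it is an assumption for illustration, not the formulation, agents, or metaheuristics used in the paper, and the names flow, dist, and layout_cost are hypothetical.

# Minimal sketch of a generic facility layout objective (flow x distance).
# Illustrative assumption only; not the paper's own problem formulation.
import itertools

# flow[i][j]: material flow between facilities i and j (e.g., transports per period)
flow = [
    [0, 5, 2],
    [5, 0, 3],
    [2, 3, 0],
]

# dist[a][b]: distance between candidate locations a and b (e.g., meters)
dist = [
    [0, 10, 20],
    [10, 0, 15],
    [20, 15, 0],
]

def layout_cost(assignment):
    """Total flow-weighted distance; assignment[i] is the location of facility i."""
    n = len(assignment)
    return sum(
        flow[i][j] * dist[assignment[i]][assignment[j]]
        for i in range(n)
        for j in range(n)
    )

# Exhaustive baseline for this toy size; reinforcement learning agents or
# metaheuristics would search this space instead of enumerating it at scale.
best = min(itertools.permutations(range(3)), key=layout_cost)
print(best, layout_cost(best))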
Pages: 10-25
Number of pages: 16
Related papers
50 records in total
[21] Route Planning and Power Management for PHEVs With Reinforcement Learning [J]. Zhang, Qian; Wu, Kui; Shi, Yang. IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2020, 69(05): 4751-4762
[22] Reinforcement learning for accelerated automatic treatment planning optimization [J]. Anjo, Eva; Rocha, Humberto; Dias, Joana. RADIOTHERAPY AND ONCOLOGY, 2024, 194: S4435-S4437
[23] Connectivity conservation planning through deep reinforcement learning [J]. Equihua, Julian; Beckmann, Michael; Seppelt, Ralf. METHODS IN ECOLOGY AND EVOLUTION, 2024, 15(04): 779-790
[24] Automatic Facility Layout Design Using Reinforcement Learning and a Analytic Hierarchy Process [J]. Ikeda H.; Nakagawa H.; Akagi H.; Sekimoto F.; Tsuchiya T. Journal of Japan Industrial Management Association, 2023, 74(03): 142-152
[25] Hot rolling planning based on deep reinforcement learning [J]. Wang, Jingliang; Sun, Yanguang; Gu, Jiachen; Chen, Jinxiang. 2024 5TH INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND COMPUTER ENGINEERING, ICAICE, 2024: 895-902
[26] H-MAS Architecture and Reinforcement Learning method for autonomous robot path planning [J]. Lamini, Chaymaa; Fathi, Youssef; Benhlima, Said. 2017 INTELLIGENT SYSTEMS AND COMPUTER VISION (ISCV), 2017
[27] Learning-to-Dispatch: Reinforcement Learning Based Flight Planning under Emergency [J]. Zhang, Kai; Yang, Yupeng; Xu, Chengtao; Liu, Dahai; Song, Houbing. 2021 IEEE INTELLIGENT TRANSPORTATION SYSTEMS CONFERENCE (ITSC), 2021: 1821-1826
[28] A Unifying Framework for Reinforcement Learning and Planning [J]. Moerland, Thomas M.; Broekens, Joost; Plaat, Aske; Jonker, Catholijn M. FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2022, 5
[29] Structure and Randomness in Planning and Reinforcement Learning [J]. Czechowski, Konrad; Januszewski, Piotr; Kozakowski, Piotr; Kucinski, Lukasz; Milos, Piotr. 2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021
[30] Intelligent Path Planning of Underwater Robot Based on Reinforcement Learning [J]. Yang, Jiachen; Ni, Jingfei; Xi, Meng; Wen, Jiabao; Li, Yang. IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 2023, 20(03): 1983-1996