Automated guided vehicle dispatching and routing integration via digital twin with deep reinforcement learning

Cited by: 20
Authors
Zhang, Lixiang [1 ]
Yang, Chen [2 ]
Yan, Yan [1 ]
Cai, Ze [1 ]
Hu, Yaoguang [1 ]
Affiliations
[1] Beijing Inst Technol, Lab Ind & Intelligent Syst Engn, Beijing 100081, Peoples R China
[2] Beijing Inst Technol, Sch Cyberspace Sci & Technol, Beijing 100081, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Dispatching; Routing; Digital twin; Reinforcement learning; Automated guided vehicle; INDUSTRY 4.0; ALGORITHM;
DOI
10.1016/j.jmsy.2023.12.008
Chinese Library Classification
T [Industrial Technology];
Discipline Code
08;
Abstract
The manufacturing industry has witnessed a significant shift towards high flexibility and adaptability, driven by personalized demands. However, automated guided vehicle (AGV) dispatching optimization remains challenging when AGV routing is considered together with spatial-temporal and kinematic constraints in intelligent production logistics systems, limiting evolving industry applications. Against this backdrop, this paper presents a digital twin (DT)-enhanced, deep reinforcement learning-based optimization framework that integrates AGV dispatching and routing at both horizontal and vertical levels. First, the proposed framework leverages a digital twin model of the shop floor to provide a simulation environment that closely mimics the actual manufacturing process, enabling the AGV dispatching agent to be trained in a realistic setting; this reduces the risk of finding unrealistic solutions under specific shop-floor settings and avoids time-consuming trial-and-error. Then, the integrated AGV dispatching and routing problem is modeled as a Markov Decision Process to optimize tardiness and energy consumption. An improved dueling double deep Q-network algorithm with count-based exploration is developed to learn a better dispatching policy by interacting with the high-fidelity DT model, which integrates a static path-planning agent using A* and a dynamic collision-avoidance agent using a deep deterministic policy gradient to prevent congestion and deadlock. Experimental results show that our method outperforms four state-of-the-art methods with lower tardiness, lower energy consumption, and better stability. The proposed method demonstrates significant potential for applying digital twins and reinforcement learning to decision-making and optimization in manufacturing processes.
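The abstract names three ingredients of the dispatching agent: a dueling Q-network, double-DQN targets, and count-based exploration. As a rough, generic sketch of how these pieces fit together (not the authors' implementation; the function names, the state-count keying, and the bonus coefficient `beta` are illustrative assumptions), each ingredient can be written in a few lines:

```python
import numpy as np
from collections import defaultdict


def dueling_q(value, advantages):
    """Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).

    Subtracting the mean advantage keeps V and A identifiable.
    """
    return value + advantages - advantages.mean(axis=-1, keepdims=True)


def double_dqn_target(reward, next_q_online, next_q_target, gamma=0.99, done=False):
    """Double DQN: the online net selects the next action,
    the target net evaluates it, reducing overestimation bias."""
    a_star = int(np.argmax(next_q_online))
    return reward + (0.0 if done else gamma * next_q_target[a_star])


class CountBonus:
    """Count-based exploration: add beta / sqrt(N(s)) to the reward,
    so rarely visited states receive a larger intrinsic bonus."""

    def __init__(self, beta=0.1):
        self.beta = beta
        self.counts = defaultdict(int)

    def __call__(self, state_key):
        self.counts[state_key] += 1
        return self.beta / np.sqrt(self.counts[state_key])
```

In a training loop, the dispatching agent would compute `dueling_q` from its network heads, regress it toward `double_dqn_target`, and augment the simulator's reward with the `CountBonus` term to encourage visiting under-explored shop-floor states.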
Pages: 492-503
Page count: 12