AGV path planning and task scheduling based on improved proximal policy optimization algorithm

Cited by: 0
Authors
Qi, Xuan [1 ]
Zhou, Tong [2 ]
Wang, Cunsong [2 ]
Peng, Xiaotian [1 ]
Peng, Hao [1 ]
Affiliations
[1] School of Mechanical and Power Engineering, Nanjing Tech University, Nanjing
[2] Institute of Intelligent Manufacturing, Nanjing Tech University, Nanjing
Source
Jisuanji Jicheng Zhizao Xitong/Computer Integrated Manufacturing Systems, CIMS | 2025, Vol. 31, No. 03
Funding
National Natural Science Foundation of China;
Keywords
automated guided vehicle; path planning; proximal policy optimization algorithm; reinforcement learning; task scheduling;
DOI
10.13196/j.cims.2023.0552
CLC number
TP [Automation technology; computer technology];
Subject classification code
0812;
Abstract
Automated Guided Vehicles (AGVs) are automated material handling equipment with high flexibility and adaptability. Current research on optimal path and scheduling algorithms for AGVs still faces problems such as poor generalization, low convergence efficiency, and long routing time. Therefore, an improved Proximal Policy Optimization (PPO) algorithm was proposed. A multi-step action selection strategy was adopted to increase the step length of AGV movement, and the AGV action set was expanded from the original 4 directions to 8 directions to optimize the path. A dynamic reward function was introduced to adjust the reward value in real time according to the current state of the AGV, enhancing its learning ability. The reward value curves of the different improvement methods were then compared to validate the convergence efficiency of the algorithm and the length of the optimal path. Finally, a novel continuous task scheduling optimization algorithm for a single AGV was developed to improve transportation efficiency. The results showed that, compared with the standard PPO algorithm, the improved algorithm shortened the optimal path by 28.6% and increased convergence efficiency by 78.5%; it performed better on more complex tasks requiring high-level policies and exhibited stronger generalization capability. Compared with Q-Learning, the Deep Q-Network (DQN) algorithm, and the Soft Actor-Critic (SAC) algorithm, the improved algorithm showed efficiency improvements of 84.4%, 83.7%, and 77.9%, respectively. After continuous task scheduling optimization for a single AGV, the average path length was reduced by 47.6%. © 2025 CIMS. All rights reserved.
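The two key modifications named in the abstract, expanding the action set from 4 to 8 directions with multi-step movement and a state-dependent reward, can be sketched on a grid world as follows. This is a minimal illustration only: all function names, reward magnitudes, and the step-cost constant are assumptions, not values taken from the paper.

```python
import math

# 4 axis-aligned directions, extended with diagonals to 8 (as in the abstract).
DIRECTIONS_4 = [(0, 1), (0, -1), (1, 0), (-1, 0)]
DIRECTIONS_8 = DIRECTIONS_4 + [(1, 1), (1, -1), (-1, 1), (-1, -1)]

def expand_actions(directions, max_steps):
    """Multi-step action set: every (direction, step-length) pair becomes
    one displacement the AGV can take in a single decision step."""
    return [(dx * k, dy * k)
            for dx, dy in directions
            for k in range(1, max_steps + 1)]

def dynamic_reward(pos, nxt, goal, obstacles):
    """Illustrative state-dependent reward: penalize collisions, reward
    reaching the goal, and otherwise give a progress bonus (reduction in
    Euclidean distance to the goal) minus a small per-step cost.
    The weights -10, +20, and 0.1 are assumed, not from the paper."""
    if nxt in obstacles:
        return -10.0
    if nxt == goal:
        return 20.0
    progress = math.dist(pos, goal) - math.dist(nxt, goal)
    return progress - 0.1

actions = expand_actions(DIRECTIONS_8, max_steps=2)
print(len(actions))  # 8 directions x 2 step lengths = 16 actions
```

In a PPO training loop, the policy network would output a distribution over this enlarged discrete action set, and `dynamic_reward` would replace a fixed per-step reward so the signal tracks the AGV's current state.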
Pages: 955-964
Number of pages: 9
References
19 entries in total
[1]  
NIU H Y, WU W M, XING Z C, WANG X K, et al., A novel multi-tasks chain scheduling algorithm based on capacity prediction to solve AGV dispatching problem in an intelligent manufacturing system[J], Journal of Manufacturing Systems, 68, pp. 130-144, (2023)
[2]  
MOUSAVI M, YAP H J, MUSA S N, et al., A fuzzy hybrid GA-PSO algorithm for multi-objective AGV scheduling in FMS[J], International Journal of Simulation Modeling, 16, 1, pp. 58-71, (2017)
[3]  
CHANG Junlin, SHAO Huihe, Heuristic algorithm for two-machine no-wait flowshop scheduling problem[J], Computer Integrated Manufacturing Systems, 11, 8, pp. 1147-1153, (2005)
[4]  
SURENDRA K G, DEVESH P S, et al., A theoretical graph based framework for parameter tuning of multi-core systems[J], International Journal of Wireless and Microwave Technologies (IJWMT), 12, 4, pp. 15-25, (2022)
[5]  
MEI Qian, DONG Baoli, Multi AGV task allocation based on hybrid ant colony genetic algorithm[J], Logistics Engineering and Management, 44, 8, pp. 1-5, (2022)
[6]  
SUN Y H, FANG M, SU Y X, AGV path planning based on improved Dijkstra algorithm[J], Journal of Physics: Conference Series, 1746, 1, (2021)
[8]  
CHEN Yifan, AGV trajectory tracking system based on adaptive fuzzy control[J], Automotive Applied Technology, 46, 2, pp. 22-24, (2021)
[9]  
SUTTON R S, Reinforcement learning: An introduction[M], (2018)
[10]  
BAI Y F, DING X F, HU D S, et al., Research on dynamic path planning of multi-AGVs based on reinforcement learning[J], Applied Sciences, 12, 16, (2022)