Multi-UAV Path Planning and Following Based on Multi-Agent Reinforcement Learning

Cited by: 14
Authors
Zhao, Xiaoru [1 ]
Yang, Rennong [1 ]
Zhong, Liangsheng [2 ]
Hou, Zhiwei [2 ]
Affiliations
[1] Air Force Engn Univ, Air Traff Control & Nav Sch, Xian 710051, Peoples R China
[2] Sun Yat Sen Univ, Sch Syst Sci & Engn, Guangzhou 510275, Peoples R China
Keywords
path planning; path follow; deep reinforcement learning; multi-UAV; parameter share;
DOI
10.3390/drones8010018
Chinese Library Classification (CLC)
TP7 [Remote Sensing Technology];
Discipline Classification Codes
081102 ; 0816 ; 081602 ; 083002 ; 1404 ;
Abstract
To meet the growing demand for multi-agent collaboration in complex scenarios, this paper introduces a parameter-sharing off-policy multi-agent path planning and following approach. Current multi-agent path planning predominantly relies on grid-based maps, whereas the proposed approach takes laser scan data as input, more closely simulating real-world applications. In this approach, each unmanned aerial vehicle (UAV) uses the soft actor-critic (SAC) algorithm as a planner and trains its policy to convergence. This policy processes laser scan data end to end, guiding the UAV to avoid obstacles and reach the goal. At the same time, the planner incorporates paths generated by a sampling-based method as following points, which are continuously updated as the UAV progresses. Sharing experience among agents facilitates multi-UAV path planning tasks and accelerates policy convergence. To address UAVs that remain stationary at the start or become overly cautious near the goal, a reward function is designed to encourage UAV movement. Additionally, a multi-UAV simulation environment is established to simulate real-world UAV scenarios and to support training and validation of the proposed approach. The simulation results demonstrate the effectiveness of the presented approach in both the training process and task performance: the algorithm achieves an 80% success rate in guiding three UAVs to their goal points.
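The abstract's reward design, which discourages a stationary or overly cautious UAV, can be illustrated with a minimal sketch. This is not the paper's actual reward function; the terminal values, progress term, and the hypothetical `speed_bonus` weight are illustrative assumptions, shown only to make the shaping idea concrete.

```python
def shaped_reward(prev_dist, curr_dist, speed, collided, reached,
                  speed_bonus=0.05):
    """Illustrative shaped reward for goal-directed UAV motion.

    prev_dist / curr_dist: distance to the goal (or current following
    point) at the previous and current steps; speed: current UAV speed.
    All weights are assumed values, not those of the cited paper.
    """
    if collided:
        return -10.0            # terminal penalty for hitting an obstacle
    if reached:
        return 10.0             # terminal reward for reaching the goal
    progress = prev_dist - curr_dist    # positive when approaching the goal
    # A small speed-proportional bonus means a hovering UAV (speed = 0)
    # earns strictly less than a moving one, countering over-caution.
    return progress + speed_bonus * speed
```

Under this shaping, standing still yields zero reward per step while any motion toward the goal yields a positive one, so the learned policy is pushed to keep moving.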
Pages: 18