Multi-UAV Cooperative Pursuit of a Fast-Moving Target UAV Based on the GM-TD3 Algorithm

Cited: 0
Authors
Zhang, Yaozhong [1 ]
Ding, Meiyan [1 ]
Yuan, Yao [1 ]
Zhang, Jiandong [1 ]
Yang, Qiming [1 ]
Shi, Guoqing [1 ]
Jiang, Frank [2 ]
Lu, Meiqu [3 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Elect & Informat, Xian 710072, Peoples R China
[2] Deakin Univ, Fac Sci Engn & Built Environm, Melbourne 3125, Australia
[3] Guangxi Minzu Univ, Sch Artificial Intelligence, Nanning 530006, Peoples R China
Keywords
UAV pursuit game; TD3; genetic algorithm; maximum mean discrepancy; evolutionary reinforcement learning;
DOI
10.3390/drones8100557
Chinese Library Classification
TP7 [Remote Sensing Technology];
Discipline Classification Code
081102 ; 0816 ; 081602 ; 083002 ; 1404 ;
Abstract
Cooperative pursuit of a fast-moving target by multiple UAVs has recently become a research hotspot. Although deep reinforcement learning (DRL) has achieved much in the UAV pursuit game, problems remain, including the high-dimensional parameter space, a tendency to fall into local optima, long training times, and low task success rates. To address these issues, we propose an improved twin delayed deep deterministic policy gradient algorithm that combines the genetic algorithm and the maximum mean discrepancy method (GM-TD3) for multi-UAV cooperative pursuit of high-speed targets. First, this paper combines GA-based evolutionary strategies with TD3 to generate actor networks. Then, to avoid local optima during training, the maximum mean discrepancy (MMD) method is used to increase the diversity of the policy population as the population parameters are updated. Finally, by setting sensitivity weights on the genetic memory buffer of individual UAVs, the mutation operator is improved to enhance the stability of the algorithm. In addition, this paper designs a hybrid reward function to accelerate training convergence. Simulation experiments verify that the training efficiency of the improved algorithm is greatly increased, yielding faster convergence; the task success rate reaches 95%, and the UAVs are further shown to cooperate better in completing the pursuit task.
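The abstract's use of maximum mean discrepancy to measure policy-population diversity can be illustrated with a minimal sketch. This is not the paper's implementation: the RBF kernel, bandwidth `sigma`, and the idea of comparing the action batches two policies emit on the same states are illustrative assumptions; the paper's actual kernel choice and integration with the GA update are not specified in this record.

```python
import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    # Pairwise RBF (Gaussian) kernel between rows of a and rows of b.
    d2 = np.sum(a**2, axis=1)[:, None] + np.sum(b**2, axis=1)[None, :] - 2.0 * a @ b.T
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2(x, y, sigma=1.0):
    """Squared maximum mean discrepancy between two sample sets.

    MMD^2 = E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)]; near zero when the
    two sets are drawn from the same distribution, larger when they differ.
    """
    return (rbf_kernel(x, x, sigma).mean()
            + rbf_kernel(y, y, sigma).mean()
            - 2.0 * rbf_kernel(x, y, sigma).mean())

# Hypothetical action batches from two policies evaluated on the same states:
rng = np.random.default_rng(0)
acts_a = rng.normal(0.0, 1.0, size=(64, 2))   # policy A's actions
acts_b = rng.normal(2.0, 1.0, size=(64, 2))   # policy B's actions (shifted)

print(mmd2(acts_a, acts_a))  # ~0: a policy compared with itself
print(mmd2(acts_a, acts_b))  # clearly positive: behaviorally distinct policies
```

In an evolutionary RL loop of the kind the abstract describes, such a pairwise score could be used to favor retaining population members whose action distributions differ from the rest, discouraging collapse to a single local optimum.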
Pages: 24