Distributed Unmanned Aerial Vehicle Cluster Testing Method Based on Deep Reinforcement Learning

Cited: 0
Authors
Li, Dong [1 ]
Yang, Panfei [1 ]
Affiliations
[1] China Elect Prod Reliabil & Environm Testing Res I, Software & Syst Res Inst, Guangzhou 511370, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2024, Vol. 14, Issue 23
Keywords
Unmanned Aerial Vehicle; communication test; deep reinforcement learning; Deep Deterministic Policy Gradient; task collaborative execution;
DOI
10.3390/app142311282
Chinese Library Classification
O6 [Chemistry];
Subject Classification Code
0703;
Abstract
During the collaborative operation of Unmanned Aerial Vehicle (UAV) clusters, communication nodes are often tested one node at a time. This single-node approach yields a network system with poor topology and robustness, an imbalanced communication load, and a highly complex test procedure, which in turn limits the system's ability to meet diverse user needs and to process large-scale tasks efficiently. To address this problem, this work proposes UTDR (distributed UAV cluster Testing method using Deep Reinforcement learning), a distributed UAV cluster testing method based on the Deep Deterministic Policy Gradient (DDPG). A system management node monitors the status and bandwidth resources of the UAV nodes that execute testing tasks. Through continuous interaction between the agent and the environment, the method predicts and evaluates, in an interpretable way, each node's future state after it processes the task currently being assigned, so that the testing tasks of the UAV cluster are executed collaboratively in an effective and stable manner. Experimental results show that the proposed method keeps the UAV cluster operating stably, predicts future node states accurately, and reduces the network load and bandwidth consumption of large-scale test tasks.
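The paper itself provides no code; the following is a minimal sketch of the kind of DDPG actor-critic update the abstract describes, assuming the management node builds a state vector from each execution node's task load and bandwidth usage and the actor outputs a continuous per-node assignment weight. All names (Actor, Critic, ddpg_update, state_dim, action_dim, tau, gamma) are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Maps the observed cluster state (assumed: per-node task load and
    bandwidth usage) to a continuous assignment weight in [0, 1] per node."""
    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Sigmoid(),
        )

    def forward(self, state):
        return self.net(state)

class Critic(nn.Module):
    """Scores a (state, action) pair: the estimated value of assigning the
    current test task this way, given the nodes' predicted future load."""
    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def ddpg_update(actor, critic, target_actor, target_critic,
                actor_opt, critic_opt, batch, gamma=0.99, tau=0.005):
    """One DDPG step on a replay batch (state, action, reward, next_state, done)."""
    s, a, r, s2, done = batch

    # Critic update: regress Q(s, a) toward the bootstrapped target value.
    with torch.no_grad():
        q_target = r + gamma * (1 - done) * target_critic(s2, target_actor(s2))
    critic_loss = nn.functional.mse_loss(critic(s, a), q_target)
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # Actor update: follow the deterministic policy gradient (maximize Q).
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()

    # Soft-update the target networks toward the online networks.
    for net, tgt in ((actor, target_actor), (critic, target_critic)):
        for p, tp in zip(net.parameters(), tgt.parameters()):
            tp.data.mul_(1 - tau).add_(tau * p.data)
```

In this sketch the reward would be shaped from the quantities the abstract reports (load balance and bandwidth consumption of the test network), but the exact state, action, and reward definitions used by UTDR are not given in this record.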
Pages: 15