A Deep Reinforcement Learning Method for Collision Avoidance with Dense Speed-Constrained Multi-UAV

Cited by: 0
Authors
Han, Jiale [1]
Zhu, Yi [1]
Yang, Jian [1]
Affiliations
[1] South China Univ Technol, Sch Automat Sci & Engn, Guangzhou 510640, Peoples R China
Source
IEEE ROBOTICS AND AUTOMATION LETTERS | 2025, Vol. 10, No. 3
Funding
National Natural Science Foundation of China;
Keywords
Collision avoidance; Autonomous aerial vehicles; Feature extraction; Safety; Recurrent neural networks; Deep reinforcement learning; Vectors; Turning; Training; Predictive models; reinforcement learning; autonomous aerial vehicles; soft actor-critic;
DOI
10.1109/LRA.2025.3527292
CLC Number
TP24 [Robotics];
Subject Classification Codes
080202; 1405;
Abstract
This letter introduces a novel deep reinforcement learning (DRL) method for the collision avoidance problem of fixed-wing unmanned aerial vehicles (UAVs). First, considering the characteristics of the collision avoidance problem, a collision prediction method is proposed to identify neighboring UAVs that pose a significant threat, and a convolutional neural network model is devised to extract dynamic environment features. Second, a trajectory tracking macro action is incorporated into the action space of the proposed DRL-based algorithm. Guided by a reward function that includes a term rewarding closeness to the preset flight paths, UAVs can return to their preset flight paths after completing collision avoidance. The proposed method is trained in simulation scenarios, with model updates performed using the soft actor-critic (SAC) algorithm. Validation experiments are conducted in several complex multi-UAV flight environments, and the results demonstrate the superiority of our method over other advanced methods.
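The abstract mentions a reward term that encourages each UAV to return to its preset flight path after an avoidance maneuver, but this record does not reproduce the formulation. The snippet below is only a minimal, hypothetical sketch of such a path-return shaping term in Python: the function name, the weight w_path, and the nearest-waypoint distance measure are assumptions for illustration, not the authors' definitions.

    import numpy as np

    def path_return_reward(uav_pos, path_points, w_path=1.0):
        # Hypothetical shaping term: larger (less negative) when the UAV is
        # closer to its preset flight path. uav_pos is a (3,) position,
        # path_points is an (N, 3) array of sampled waypoints on the path.
        dists = np.linalg.norm(path_points - uav_pos, axis=1)
        d_min = dists.min()          # distance to the nearest sampled waypoint
        return -w_path * d_min       # dense penalty on deviation from the path

    # Usage with made-up numbers: a straight preset path at 30 m altitude.
    path = np.stack([np.linspace(0.0, 100.0, 50),
                     np.zeros(50),
                     np.full(50, 30.0)], axis=1)
    print(path_return_reward(np.array([10.0, 5.0, 28.0]), path))

In the actual method, such a term would presumably be one component of a composite reward that also accounts for collision threats and safe separation, with the policy trained via SAC as stated in the abstract.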
Pages: 2152 - 2159
Number of pages: 8
Related Papers
50 records in total
  • [41] Cooperative Multi-UAV Collision Avoidance Based on Distributed Dynamic Optimization and Causal Analysis
    Lao, Mingrui
    Tang, Jun
    APPLIED SCIENCES-BASEL, 2017, 7 (01):
  • [42] Dynamic Attention Network for Multi-UAV Reinforcement Learning
    Xu, Dongsheng
    Wu, Shang
    INTERNATIONAL CONFERENCE ON ALGORITHMS, HIGH PERFORMANCE COMPUTING, AND ARTIFICIAL INTELLIGENCE (AHPCAI 2021), 2021, 12156
  • [43] Multi-UAV Collaborative Detection Based on Reinforcement Learning
    Hao, Yuanhui
    Guo, Chubing
    Ke, Liangjun
    ADVANCES IN SWARM INTELLIGENCE, PT I, ICSI 2024, 2024, 14788 : 463 - 474
  • [44] DEEP REINFORCEMENT LEARNING FOR SHIP COLLISION AVOIDANCE AND PATH TRACKING
    Singht, Amar Nath
    Vijayakumar, Akash
    Balasubramaniyam, Shankruth
    Somayajula, Abhilash
    PROCEEDINGS OF ASME 2024 43RD INTERNATIONAL CONFERENCE ON OCEAN, OFFSHORE AND ARCTIC ENGINEERING, OMAE2024, VOL 5B, 2024,
  • [45] Reinforcement-Learning-Assisted Multi-UAV Task Allocation and Path Planning for IIoT
    Zhao, Guodong
    Wang, Ye
    Mu, Tong
    Meng, Zhijun
    Wang, Zichen
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (16): : 26766 - 26777
  • [46] Smooth Trajectory Collision Avoidance through Deep Reinforcement Learning
    Song, Sirui
    Saunders, Kirk
    Yue, Ye
    Liu, Jundong
    2022 21ST IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS, ICMLA, 2022, : 914 - 919
  • [47] Multi-UAV reconnaissance mission planning via deep reinforcement learning with simulated annealing
    Fan, Mingfeng
    Liu, Huan
    Wu, Guohua
    Gunawan, Aldy
    Sartoretti, Guillaume
    SWARM AND EVOLUTIONARY COMPUTATION, 2025, 93
  • [48] Scalable and Cooperative Deep Reinforcement Learning Approaches for Multi-UAV Systems: A Systematic Review
    Frattolillo, Francesco
    Brunori, Damiano
    Iocchi, Luca
    DRONES, 2023, 7 (04)
  • [49] Multi-UAV Speed Control with Collision Avoidance and Handover-aware Cell Association: DRL with Action Branching
    Yan, Zijiang
    Jaafar, Wael
    Selim, Bassant
    Tabassum, Hina
    IEEE CONFERENCE ON GLOBAL COMMUNICATIONS, GLOBECOM, 2023, : 5067 - 5072
  • [50] Deep Reinforcement Learning Based Computation Offloading and Trajectory Planning for Multi-UAV Cooperative Target Search
    Luo, Quyuan
    Luan, Tom H.
    Shi, Weisong
    Fan, Pingzhi
    IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2023, 41 (02) : 504 - 520