Bio-Inspired Collision Avoidance in Swarm Systems via Deep Reinforcement Learning

Citations: 31
Authors
Na, Seongin [1 ]
Niu, Hanlin [1 ]
Lennox, Barry [1 ]
Arvin, Farshad [1 ]
Affiliations
[1] Univ Manchester, Sch Engn, Dept Elect & Elect Engn, Swarm & Computat Intelligence Lab SwaCIL, Manchester M13 9PL, Lancs, England
Funding
EU Horizon 2020; UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Collision avoidance; Autonomous vehicles; Communication networks; Training; Task analysis; Robot sensing systems; Servers; autonomous vehicles; multi-agent systems; deep reinforcement learning; swarm robotics; MEMORY; ANT;
DOI
10.1109/TVT.2022.3145346
Chinese Library Classification (CLC) codes
TM [Electrical engineering]; TN [Electronics and communication technology];
Discipline codes
0808; 0809;
Abstract
Autonomous vehicles have been highlighted as a major growth area for future transportation systems, and the deployment of large numbers of these vehicles is expected once safety and legal challenges are overcome. To meet the necessary safety standards, effective collision avoidance technologies are required to ensure that the number of accidents is kept to a minimum. As large numbers of autonomous vehicles operating together on roads can be regarded as a swarm system, we propose a bio-inspired collision avoidance strategy using virtual pheromones, an approach that has evolved effectively in nature over many millions of years. Previous research using virtual pheromones showed the potential of pheromone-based systems to maneuver a swarm of robots. However, designing an individual controller that maximises the performance of the entire swarm is a major challenge. In this paper, we propose a novel deep reinforcement learning (DRL) based approach that is able to train a controller that introduces collision avoidance behaviour. To accelerate training, we propose a novel sampling strategy called Highlight Experience Replay and integrate it with a Deep Deterministic Policy Gradient algorithm in which noise is added to the weights and biases of the artificial neural network to improve exploration. To evaluate the performance of the proposed DRL-based controller, we applied it to navigation and collision avoidance tasks in three different traffic scenarios. The experimental results showed that the proposed DRL-based controller outperformed the manually-tuned controller in terms of stability, effectiveness, robustness and ease of tuning. Furthermore, the proposed Highlight Experience Replay method outperformed the popular Prioritized Experience Replay sampling strategy, requiring on average only 27% of its training time across the three stages.
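The abstract mentions adding noise to the actor network's weights and biases to improve exploration. As a rough illustration only (the paper's exact noise schedule and the Highlight Experience Replay mechanism are not specified in this record), a minimal sketch of parameter-space noise might look like the following; the dict-of-arrays parameter layout, the `sigma` value, and the toy single-layer actor are all hypothetical:

```python
import numpy as np

def perturb_parameters(params, sigma=0.1, rng=None):
    """Return a copy of network parameters with zero-mean Gaussian
    noise added to every weight and bias array.

    `params` maps layer names to (weights, biases) NumPy arrays.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    noisy = {}
    for name, (w, b) in params.items():
        noisy[name] = (w + rng.normal(0.0, sigma, w.shape),
                       b + rng.normal(0.0, sigma, b.shape))
    return noisy

# Toy single-layer "actor": action = tanh(W x + b).
params = {"layer1": (np.array([[0.5, -0.2]]), np.array([0.1]))}
noisy = perturb_parameters(params, sigma=0.05)

x = np.array([1.0, 2.0])          # example state observation
w, b = noisy["layer1"]
action = np.tanh(w @ x + b)       # perturbed policy output in [-1, 1]
```

In parameter-space exploration, the perturbed copy of the policy is typically used to act for a whole episode (rather than adding noise to each action), which yields more temporally consistent exploration.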
Pages: 2511-2526
Page count: 16
Related Papers
(50 total)
[31] Ma, Yong; Zhao, Yujiao; Wang, Yulong; Gan, Langxiong; Zheng, Yuanzhou. Collision-avoidance under COLREGS for unmanned surface vehicles via deep reinforcement learning. MARITIME POLICY & MANAGEMENT, 2020, 47(05): 665-686.
[32] Han, Jiale; Zhu, Yi; Yang, Jian. A Deep Reinforcement Learning Method for Collision Avoidance with Dense Speed-Constrained Multi-UAV. IEEE ROBOTICS AND AUTOMATION LETTERS, 2025, 10(03): 2152-2159.
[33] Wang, Weiqiang; Huang, Liwen; Liu, Kezhong; Wu, Xiaolie; Wang, Jingyao. A COLREGs-Compliant Collision Avoidance Decision Approach Based on Deep Reinforcement Learning. JOURNAL OF MARINE SCIENCE AND ENGINEERING, 2022, 10(07).
[34] Ashwin, S. H.; Naveen Raj, R. Deep reinforcement learning for autonomous vehicles: lane keep and overtaking scenarios with collision avoidance. International Journal of Information Technology, 2023, 15(7): 3541-3553.
[35] Zhao, Shiyue; Zhang, Junzhi; He, Chengkun; Ji, Yuan; Huang, Heye; Hou, Xiaohui. Autonomous vehicle extreme control for emergency collision avoidance via Reachability-Guided reinforcement learning. ADVANCED ENGINEERING INFORMATICS, 2024, 62.
[36] Wang, Yong; Xu, Haixiang; Feng, Hui; He, Jianhua; Yang, Haojie; Li, Fen; Yang, Zhen. Deep reinforcement learning based collision avoidance system for autonomous ships. OCEAN ENGINEERING, 2024, 292.
[37] Chun, Do-Hyun; Roh, Myung-Il; Lee, Hye-Won; Ha, Jisang; Yu, Donghun. Deep reinforcement learning-based collision avoidance for an autonomous ship. OCEAN ENGINEERING, 2021, 234.
[38] Park, K.-W.; Kim, J.-H. Aircraft collision avoidance modeling and optimization using deep reinforcement learning. Journal of Institute of Control, Robotics and Systems, 2021, 27(09): 652-659.
[39] Song, Sirui; Zhang, Yuanhang; Qin, Xi; Saunders, Kirk; Liu, Jundong. Vision-guided Collision Avoidance through Deep Reinforcement Learning. PROCEEDINGS OF THE 2021 IEEE NATIONAL AEROSPACE AND ELECTRONICS CONFERENCE (NAECON), 2021: 191-194.
[40] Hernandez-Herrera, Alejandro; Rubio Espino, Elsa; Escamilla Ambrosio, Ponciano Jorge. A Bio-Inspired Cybersecurity Scheme to Protect a Swarm of Robots. ADVANCES IN COMPUTATIONAL INTELLIGENCE, MICAI 2018, PT II, 2018, 11289: 318-331.