Bio-Inspired Collision Avoidance in Swarm Systems via Deep Reinforcement Learning

Cited by: 31
Authors
Na, Seongin [1 ]
Niu, Hanlin [1 ]
Lennox, Barry [1 ]
Arvin, Farshad [1 ]
Affiliations
[1] Univ Manchester, Sch Engn, Dept Elect & Elect Engn, Swarm & Computat Intelligence Lab SwaCIL, Manchester M13 9PL, Lancs, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC); EU Horizon 2020;
Keywords
Collision avoidance; Autonomous vehicles; Communication networks; Training; Task analysis; Robot sensing systems; Servers; autonomous vehicles; multi-agent systems; deep reinforcement learning; swarm robotics; MEMORY; ANT;
DOI
10.1109/TVT.2022.3145346
CLC Classification Number
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Subject Classification Code
0808 ; 0809 ;
Abstract
Autonomous vehicles have been highlighted as a major growth area for future transportation systems, and the deployment of large numbers of these vehicles is expected once safety and legal challenges are overcome. To meet the necessary safety standards, effective collision avoidance technologies are required to ensure that the number of accidents is kept to a minimum. As large numbers of autonomous vehicles operating together on roads can be regarded as a swarm system, we propose a bio-inspired collision avoidance strategy using virtual pheromones, an approach that has evolved effectively in nature over many millions of years. Previous research using virtual pheromones showed the potential of pheromone-based systems to maneuver a swarm of robots. However, designing an individual controller that maximises the performance of the entire swarm is a major challenge. In this paper, we propose a novel deep reinforcement learning (DRL) based approach that is able to train a controller exhibiting collision avoidance behaviour. To accelerate training, we propose a novel sampling strategy called Highlight Experience Replay and integrate it with a Deep Deterministic Policy Gradient algorithm in which noise is added to the weights and biases of the artificial neural network to improve exploration. To evaluate the performance of the proposed DRL-based controller, we applied it to navigation and collision avoidance tasks in three different traffic scenarios. The experimental results showed that the proposed DRL-based controller outperformed the manually-tuned controller in terms of stability, effectiveness, robustness and ease of tuning. Furthermore, the proposed Highlight Experience Replay method outperformed the popular Prioritized Experience Replay sampling strategy, requiring on average only 27% of its training time across the three stages.
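The abstract does not give implementation details of Highlight Experience Replay, but the core idea it names (over-sampling particularly successful experience during off-policy training) can be illustrated with a minimal sketch. The class, its parameters (`highlight_ratio`, `highlight_capacity`) and the record-return promotion rule below are illustrative assumptions, not the paper's actual algorithm:

```python
import random
from collections import deque

class HighlightReplayBuffer:
    """Hypothetical sketch of a highlight-style replay buffer.

    Transitions from record-setting episodes are kept in a separate
    'highlight' pool, and each training batch mixes uniform samples
    with over-sampled highlight transitions.
    """

    def __init__(self, capacity=10000, highlight_capacity=1000,
                 highlight_ratio=0.3):
        self.buffer = deque(maxlen=capacity)                 # ordinary transitions
        self.highlights = deque(maxlen=highlight_capacity)   # high-return transitions
        self.highlight_ratio = highlight_ratio               # fraction drawn from highlights
        self.best_return = float("-inf")

    def add_episode(self, transitions, episode_return):
        """Store an episode; promote it to the highlight pool on a new best return."""
        self.buffer.extend(transitions)
        if episode_return > self.best_return:
            self.best_return = episode_return
            self.highlights.extend(transitions)

    def sample(self, batch_size):
        """Draw a mixed batch: mostly uniform, partly highlight transitions."""
        n_high = min(int(batch_size * self.highlight_ratio), len(self.highlights))
        batch = random.sample(self.highlights, n_high)
        batch += random.sample(self.buffer, batch_size - n_high)
        return batch
```

Sampling good experience more often is also the motivation behind Prioritized Experience Replay; the difference suggested by the abstract is that highlight selection is driven by episode-level success rather than per-transition TD error.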
Pages: 2511-2526
Page count: 16