Bio-Inspired Collision Avoidance in Swarm Systems via Deep Reinforcement Learning

Cited by: 31
Authors
Na, Seongin [1 ]
Niu, Hanlin [1 ]
Lennox, Barry [1 ]
Arvin, Farshad [1 ]
Affiliations
[1] Univ Manchester, Sch Engn, Dept Elect & Elect Engn, Swarm & Computat Intelligence Lab SwaCIL, Manchester M13 9PL, Lancs, England
Funding
EU Horizon 2020; UK Engineering and Physical Sciences Research Council (EPSRC)
Keywords
Collision avoidance; Autonomous vehicles; Communication networks; Training; Task analysis; Robot sensing systems; Servers; multi-agent systems; deep reinforcement learning; swarm robotics; MEMORY; ANT
DOI
10.1109/TVT.2022.3145346
CLC number (Chinese Library Classification)
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline codes
0808; 0809
Abstract
Autonomous vehicles have been highlighted as a major growth area for future transportation systems, and the deployment of large numbers of these vehicles is expected once safety and legal challenges are overcome. To meet the necessary safety standards, effective collision avoidance technologies are required to ensure that the number of accidents is kept to a minimum. As large numbers of autonomous vehicles operating together on roads can be regarded as a swarm system, we propose a bio-inspired collision avoidance strategy using virtual pheromones, an approach that has evolved effectively in nature over many millions of years. Previous research using virtual pheromones showed the potential of pheromone-based systems to maneuver a swarm of robots. However, designing an individual controller that maximises the performance of the entire swarm is a major challenge. In this paper, we propose a novel deep reinforcement learning (DRL) based approach that is able to train a controller that introduces collision avoidance behaviour. To accelerate training, we propose a novel sampling strategy called Highlight Experience Replay and integrate it with a Deep Deterministic Policy Gradient algorithm, with noise added to the weights and biases of the artificial neural network to improve exploration. To evaluate the performance of the proposed DRL-based controller, we applied it to navigation and collision avoidance tasks in three different traffic scenarios. The experimental results showed that the proposed DRL-based controller outperformed the manually-tuned controller in terms of stability, effectiveness, robustness and ease of tuning. Furthermore, the proposed Highlight Experience Replay method outperformed the popular Prioritized Experience Replay sampling strategy, requiring on average only 27% of its training time over the three stages.
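The abstract describes a sampling strategy that replays "highlight" experience alongside ordinary transitions. As a rough illustration of that idea, the sketch below implements a replay buffer that retains transitions from the best-return episodes seen so far and mixes them into each training batch. This is an assumption-laden sketch, not the paper's actual Highlight Experience Replay algorithm; the class name, the `highlight_ratio` parameter, and the best-return retention rule are all hypothetical choices made here for illustration.

```python
import random
from collections import deque


class HighlightReplayBuffer:
    """Illustrative replay buffer: mixes uniformly sampled transitions
    with transitions kept from the highest-return ("highlight") episodes.
    A sketch only; the paper's method may differ in its selection rule."""

    def __init__(self, capacity=10000, highlight_capacity=1000, highlight_ratio=0.5):
        self.buffer = deque(maxlen=capacity)                # ordinary transitions
        self.highlights = deque(maxlen=highlight_capacity)  # transitions from best episodes
        self.highlight_ratio = highlight_ratio              # fraction of each batch from highlights
        self.best_return = float("-inf")

    def add_episode(self, transitions, episode_return):
        """Store an episode's transitions; keep a copy in the highlight
        store if the episode beats the best return seen so far."""
        self.buffer.extend(transitions)
        if episode_return > self.best_return:
            self.best_return = episode_return
            self.highlights.extend(transitions)

    def sample(self, batch_size):
        """Draw a batch: up to highlight_ratio of it from highlight
        transitions, the remainder uniformly from the main buffer."""
        n_high = min(int(batch_size * self.highlight_ratio), len(self.highlights))
        batch = random.sample(list(self.highlights), n_high) if n_high else []
        batch += random.sample(list(self.buffer), batch_size - n_high)
        return batch
```

In this sketch, biasing batches toward high-return episodes is what would let the learner revisit rare successful behaviour more often than uniform sampling would, which is one plausible way a highlight-style strategy could shorten training relative to uniform or prioritized replay.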
Pages: 2511-2526 (16 pages)