Navigation Based on Hybrid Decentralized and Centralized Training and Execution Strategy for Multiple Mobile Robots Reinforcement Learning

Cited by: 3
Authors
Dai, Yanyan [1 ]
Kim, Deokgyu [1 ]
Lee, Kidong [1 ]
Affiliations
[1] Yeungnam Univ, Robot Dept, Gyongsan 38541, South Korea
Keywords
multiple-robot navigation; hybrid DCTE strategy; reinforcement learning; DQN; effectiveness and efficiency;
DOI
10.3390/electronics13152927
Chinese Library Classification (CLC)
TP [automation technology, computer technology]
Discipline code
0812
Abstract
In addressing the complex challenges of path planning in multi-robot systems, this paper proposes a novel Hybrid Decentralized and Centralized Training and Execution (DCTE) Strategy aimed at optimizing computational efficiency and system performance. The strategy resolves the prevalent issues of collision and coordination through a tiered optimization process. It begins with a decentralized path-planning step based on Deep Q-Network (DQN), in which each robot independently formulates its path. This is followed by a centralized collision-detection step that identifies potential intersections and collision risks among the planned paths. Paths confirmed as non-intersecting proceed directly to execution, while those passing through collision areas trigger a dynamic re-planning step using DQN, in which robots treat one another as dynamic obstacles to circumnavigate, ensuring continuous operation without disruption. The final step links the newly optimized paths with the original safe paths to form complete and secure execution routes. This structured strategy not only mitigates collision risks but also significantly improves the computational efficiency of multi-robot systems: in the simulation comparison, reinforcement learning required only 3 min 36 s with the DCTE strategy, versus 5 min 33 s with the comparison method. This improvement underscores the advantages of the proposed method in enhancing the effectiveness and efficiency of multi-robot systems.
Pages: 14
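
The four-step pipeline described in the abstract can be summarized in a short sketch. The following is a minimal illustration, not the authors' implementation: plan_path_dqn, find_conflicts, and hybrid_dcte are hypothetical names, the trained DQN planner is replaced by a greedy grid walk that can wait in place, paths are lists of grid cells indexed by timestep, and a collision is simply two robots occupying the same cell at the same timestep. The grid abstraction and all names are assumptions for illustration only.

```python
from itertools import combinations

def plan_path_dqn(start, goal, dynamic_obstacles=frozenset()):
    # Hypothetical stand-in for the decentralized DQN planner: a greedy
    # grid walk toward the goal that waits in place whenever the next
    # (cell, timestep) pair is claimed by another robot's path.
    path = [start]
    while path[-1] != goal:
        x, y = path[-1]
        nxt = (x + (goal[0] > x) - (goal[0] < x),
               y + (goal[1] > y) - (goal[1] < y))
        if (nxt, len(path)) in dynamic_obstacles:
            nxt = (x, y)  # treat the other robot as a dynamic obstacle: wait
        path.append(nxt)
    return path

def find_conflicts(paths):
    # Centralized collision detection: flag every pair of robots that
    # would occupy the same cell at the same timestep.
    clashed = set()
    for (i, p), (j, q) in combinations(paths.items(), 2):
        if any(a == b for a, b in zip(p, q)):
            clashed.update((i, j))
    return clashed

def hybrid_dcte(tasks):
    # tasks: {robot_id: (start, goal)} -> collision-free path per robot.
    # Step 1: decentralized planning; every robot plans independently.
    paths = {r: plan_path_dqn(s, g) for r, (s, g) in tasks.items()}
    # Step 2: centralized collision detection over all planned paths.
    for r in sorted(find_conflicts(paths)):
        # Step 3: conflicting robots re-plan, treating the other robots'
        # committed paths as dynamic (cell, timestep) obstacles.
        blocked = {(cell, t) for o, p in paths.items() if o != r
                   for t, cell in enumerate(p)}
        paths[r] = plan_path_dqn(*tasks[r], dynamic_obstacles=blocked)
    # Step 4: re-planned segments are linked with the untouched safe
    # paths, giving the complete routes the robots execute.
    return paths

if __name__ == "__main__":
    # Two robots on crossing diagonals would collide at (2, 2); after
    # re-planning, r1 waits one step and both reach their goals safely.
    print(hybrid_dcte({"r1": ((0, 0), (4, 4)), "r2": ((4, 0), (0, 4))}))
```

The design point this sketch mirrors from the abstract is that only robots flagged by the centralized check pay the cost of re-planning; conflict-free paths are executed unchanged, which is where the reported reduction in computation time comes from.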