Navigation method for mobile robot based on hierarchical deep reinforcement learning

Cited by: 0
Authors
Wang T. [1 ]
Li A. [1 ]
Song H.-L. [1 ]
Liu W. [1 ]
Wang M.-H. [1 ]
Affiliations
[1] School of Information Science and Technology, University of Science and Technology of China, Hefei
Source
Kongzhi yu Juece/Control and Decision | 2022, Vol. 37, No. 11
Keywords
deep reinforcement learning; hierarchical deep reinforcement learning; mobile robot; navigation; obstacle avoidance; policy learning;
DOI
10.13195/j.kzyjc.2021.1013
Abstract
To address the poor performance of existing hierarchical navigation methods based on deep reinforcement learning (DRL) in complex environments containing structures such as long corridors and dead corners, we propose a navigation method for mobile robots based on option-based hierarchical deep reinforcement learning (HDRL). The framework consists of two levels of control models: low-level models that learn separate policies for obstacle avoidance and goal reaching, and a high-level behavior selection model that automatically learns a stable and reliable behavior selection policy without relying on manually designed control rules. In addition, a training method for optimizing the obstacle avoidance control model is proposed, which makes the learned obstacle avoidance policy better suited to navigation tasks in complex environments. Compared with existing DRL-based navigation methods, the proposed method achieves the highest navigation success rate in all simulated test environments used in this paper and shows better overall performance on other metrics, which demonstrates that it can effectively solve the problem of poor navigation performance in complex environments and has strong generalization ability. Moreover, experiments in a real-world environment verify the potential application value of the proposed method. © 2022 Northeast University. All rights reserved.
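To illustrate the option-based, two-level structure described above, the following is a minimal sketch of the control loop: a high-level selector chooses between a goal-reaching option and an obstacle-avoidance option at every step, and the chosen low-level policy produces the velocity command. The class names, the linear stand-in policies, and the environment interface are hypothetical illustrations rather than the authors' implementation.

```python
# Minimal sketch of an option-based hierarchical navigation loop.
# Assumptions (not from the paper): linear stand-ins replace trained networks,
# and the environment exposes reset()/step() returning (observation, done).
import numpy as np


class LowLevelPolicy:
    """Maps an observation (e.g. laser scan + relative goal) to (linear_v, angular_w)."""

    def __init__(self, weights: np.ndarray):
        self.weights = weights  # placeholder for a trained policy network

    def act(self, obs: np.ndarray) -> np.ndarray:
        return np.tanh(self.weights @ obs)  # bounded velocity command


class HighLevelSelector:
    """Chooses which low-level option to execute at the current step."""

    def __init__(self, weights: np.ndarray):
        self.weights = weights  # placeholder for a trained selection network

    def select(self, obs: np.ndarray) -> int:
        # 0: goal-reaching option, 1: obstacle-avoidance option
        return int(np.argmax(self.weights @ obs))


def navigate(env, selector: HighLevelSelector, options: list, max_steps: int = 500):
    """Run one episode: the selector picks an option, the option issues the command."""
    obs = env.reset()
    for _ in range(max_steps):
        option_id = selector.select(obs)      # high-level behavior selection
        action = options[option_id].act(obs)  # low-level control
        obs, done = env.step(action)
        if done:
            break
```

Keeping the low-level options decoupled in this way is what allows the obstacle-avoidance policy to be trained and optimized separately, as the abstract describes.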
Pages: 2799-2807
Number of pages: 8