Speed Planning Based on Terrain-Aware Constraint Reinforcement Learning in Rugged Environments

Cited by: 2
Authors
Yang, Andong [1 ]
Li, Wei [1 ]
Hu, Yu [1 ]
Affiliations
[1] Univ Chinese Acad Sci, Chinese Acad Sci, Inst Comp Technol, Res Ctr Intelligent Comp Syst, Beijing 100190, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Planning; Robots; Semantics; Data mining; Neural networks; Reinforcement learning; Mobile robots; Speed planning; mobile robot; rugged environments; reinforcement learning; MODEL-PREDICTIVE CONTROL; NAVIGATION;
DOI
10.1109/LRA.2024.3354629
CLC number
TP24 [Robotics];
Discipline codes
080202; 1405
Abstract
Speed planning in rugged terrain poses challenges due to various constraints, such as traverse efficiency, dynamics, safety, and smoothness. This letter introduces a framework based on Constrained Reinforcement Learning (CRL) that considers all of these constraints. A further challenge is extracting terrain information and formulating it as a constraint for the CRL framework. To this end, a terrain constraint extraction module is designed that quantifies the semantic and geometric attributes of the terrain by estimating the maximum safe speed. All networks are trained on simulators or datasets and eventually deployed on a real mobile robot. To continuously improve planning performance and mitigate errors caused by the simulation-to-reality gap, we propose a feedback structure that detects and preserves critical experiences during testing. Experiments in simulation and on the real robot demonstrate that our method reduces the frequency of dangerous states by 45% and improves smoothness by up to 71%.
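The letter's exact formulation is not reproduced in this record. As a rough illustration of the constrained-RL idea the abstract describes, the hypothetical sketch below uses a primal-dual (Lagrangian) update to hold a commanded speed near a terrain-estimated maximum safe speed `v_max`; the function name, parameters, and learning rates are all illustrative assumptions, not the authors' method.

```python
def lagrangian_speed_step(speed, v_max, reward_grad, lam, lr=0.1, lr_lam=0.05):
    """One primal-dual update: the primal variable (commanded speed) ascends the
    reward, while a Lagrange multiplier penalizes exceeding the terrain's
    estimated maximum safe speed v_max (the constraint)."""
    violation = max(0.0, speed - v_max)           # constraint slack g = speed - v_max
    penalty = lam if speed > v_max else 0.0       # subgradient of lam * max(0, g)
    speed = speed + lr * (reward_grad - penalty)  # primal ascent on the Lagrangian
    lam = max(0.0, lam + lr_lam * violation)      # dual ascent keeps lam >= 0
    return speed, lam

# Toy run: the reward always favors going faster, so the multiplier grows
# until the commanded speed oscillates near the safe limit of 2.0.
speed, lam = 0.0, 0.0
for _ in range(200):
    speed, lam = lagrangian_speed_step(speed, v_max=2.0, reward_grad=1.0, lam=lam)
```

In the letter itself the policy is a neural network and the constraint comes from a learned terrain module; this scalar toy only shows the constraint mechanism that keeps speed commands from drifting past the safe limit.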
Pages: 2096-2103 (8 pages)