Speed Planning Based on Terrain-Aware Constraint Reinforcement Learning in Rugged Environments

Cited by: 2
Authors
Yang, Andong [1 ]
Li, Wei [1 ]
Hu, Yu [1 ]
Affiliations
[1] Univ Chinese Acad Sci, Chinese Acad Sci, Inst Comp Technol, Res Ctr Intelligent Comp Syst, Beijing 100190, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Planning; Robots; Semantics; Data mining; Neural networks; Reinforcement learning; Mobile robots; Speed planning; mobile robot; rugged environments; reinforcement learning; MODEL-PREDICTIVE CONTROL; NAVIGATION;
DOI
10.1109/LRA.2024.3354629
Chinese Library Classification
TP24 [Robotics]
Discipline Classification Code
080202; 1405
Abstract
Speed planning in rugged terrain is challenging because it must satisfy multiple constraints, including traverse efficiency, dynamics, safety, and smoothness. This letter introduces a framework based on Constrained Reinforcement Learning (CRL) that accounts for all of these constraints. A further obstacle is extracting terrain information in a form that can be added to the CRL as a constraint; to address this, a terrain constraint extraction module is designed that quantifies the semantic and geometric attributes of the terrain by estimating the maximum safe speed. All networks are trained in simulators or on datasets and eventually deployed on a real mobile robot. To continuously improve planning performance and mitigate errors caused by the simulation-to-reality gap, we propose a feedback structure that detects and preserves critical experiences during testing. Experiments in simulation and on the real robot demonstrate that our method reduces the frequency of dangerous states by 45% and improves smoothness by up to 71%.
Pages: 2096-2103
Number of pages: 8
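
The abstract describes three components: a CRL-based speed planner, a terrain constraint extraction module that estimates a maximum safe speed, and a feedback structure for preserving critical experiences. As a rough illustration of the first two ideas only, the sketch below shows a Lagrangian-style constrained policy-gradient loop in which a hypothetical max_safe_speed() module converts terrain features into a speed cap used as the constraint. All function names, feature dimensions, and numeric values are illustrative assumptions and not the authors' implementation.

```python
# Hedged sketch only (not the paper's code): a Lagrangian-style constrained
# policy-gradient update for speed planning, where a stand-in terrain module
# maps local terrain features to a maximum safe speed acting as the constraint.
import numpy as np

rng = np.random.default_rng(0)
SIGMA = 0.1  # fixed exploration std of the Gaussian speed policy


def max_safe_speed(terrain_feat: np.ndarray) -> float:
    """Stand-in terrain-constraint extractor: rougher terrain -> lower speed cap."""
    roughness = float(np.clip(terrain_feat.mean(), 0.0, 1.0))
    return 0.5 + 2.0 * (1.0 - roughness)  # speed cap in m/s (illustrative)


class LinearSpeedPolicy:
    """Gaussian policy over a scalar speed command, linear in terrain features."""

    def __init__(self, dim: int, lr: float = 1e-3):
        self.w = np.zeros(dim)
        self.lr = lr

    def act(self, feat: np.ndarray) -> float:
        return float(feat @ self.w + SIGMA * rng.standard_normal())

    def update(self, feat: np.ndarray, speed: float, advantage: float) -> None:
        # REINFORCE gradient of log N(speed | w^T feat, SIGMA^2) w.r.t. w
        grad = (speed - feat @ self.w) / SIGMA**2 * feat
        self.w += self.lr * advantage * grad


policy = LinearSpeedPolicy(dim=4)
lam, lam_lr = 0.0, 0.05  # Lagrange multiplier and its step size

for step in range(5000):
    feat = rng.uniform(0.0, 1.0, size=4)   # stand-in semantic/geometric features
    v_cap = max_safe_speed(feat)           # terrain constraint for this state
    v = policy.act(feat)
    reward = v                             # traverse efficiency: faster is better
    violation = max(0.0, v - v_cap)        # constraint cost (speed over the cap)
    # Primal step: maximize reward - lambda * violation
    policy.update(feat, v, advantage=reward - lam * violation)
    # Dual step: raise lambda while the constraint is being violated
    lam = max(0.0, lam + lam_lr * violation)
```

In this toy setup the multiplier lam grows whenever the commanded speed exceeds the terrain-derived cap, so the policy learns to drive as fast as the constraint allows; the paper's feedback structure for critical experiences is not modeled here.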