Speed Planning Based on Terrain-Aware Constraint Reinforcement Learning in Rugged Environments

Cited by: 2
Authors
Yang, Andong [1 ]
Li, Wei [1 ]
Hu, Yu [1 ]
Affiliations
[1] Univ Chinese Acad Sci, Chinese Acad Sci, Inst Comp Technol, Res Ctr Intelligent Comp Syst, Beijing 100190, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Planning; Robots; Semantics; Data mining; Neural networks; Reinforcement learning; Mobile robots; Speed planning; mobile robot; rugged environments; reinforcement learning; MODEL-PREDICTIVE CONTROL; NAVIGATION;
DOI
10.1109/LRA.2024.3354629
Chinese Library Classification
TP24 [Robotics];
Discipline Codes
080202 ; 1405 ;
Abstract
Speed planning in rugged terrain is challenging because of the many constraints involved, such as traversal efficiency, dynamics, safety, and smoothness. This letter introduces a framework based on Constrained Reinforcement Learning (CRL) that accounts for all of these constraints. A further obstacle is extracting terrain information in a form that can be added to the CRL as a constraint. To this end, a terrain constraint extraction module is designed that quantifies the semantic and geometric attributes of the terrain by estimating the maximum safe speed. All networks are trained in simulators or on datasets and are eventually deployed on a real mobile robot. To continuously improve planning performance and mitigate errors caused by the simulation-to-reality gap, we propose a feedback structure that detects and preserves critical experiences during testing. Experiments in simulation and on the real robot demonstrate that our method reduces the frequency of dangerous states by 45% and improves smoothness by up to 71%.
Pages: 2096-2103
Page count: 8
Related papers
50 in total
  • [31] Risk-Aware Complete Coverage Path Planning Using Reinforcement Learning
    Wijegunawardana, I. D.
    Samarakoon, S. M. Bhagya P.
    Muthugala, M. A. Viraj J.
    Elara, Mohan Rajesh
    IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 2025, 55 (04): : 2476 - 2488
  • [32] Resource-Aware Personalized Federated Learning Based on Reinforcement Learning
    Wu, Tingting
    Li, Xiao
    Gao, Pengpei
    Yu, Wei
    Xin, Lun
    Guo, Manxue
    IEEE COMMUNICATIONS LETTERS, 2025, 29 (01) : 175 - 179
  • [33] Energy-Based Policy Constraint for Offline Reinforcement Learning
    Peng, Zhiyong
    Han, Changlin
    Liu, Yadong
    Zhou, Zongtan
    ARTIFICIAL INTELLIGENCE, CICAI 2023, PT II, 2024, 14474 : 335 - 346
  • [34] Deep Reinforcement Learning Based Trajectory Planning Under Uncertain Constraints
    Chen, Lienhung
    Jiang, Zhongliang
    Cheng, Long
    Knoll, Alois C.
    Zhou, Mingchuan
    FRONTIERS IN NEUROROBOTICS, 2022, 16
  • [35] Path planning for mobile robot based on improved reinforcement learning algorithm
    Xu X.
    Yuan J.
    Zhongguo Guanxing Jishu Xuebao/Journal of Chinese Inertial Technology, 2019, 27 (03): : 314 - 320
  • [36] A scheduling algorithm based on reinforcement learning for heterogeneous environments
    Lin, Ziniu
    Li, Chen
    Tian, Lihua
    Zhang, Bin
    APPLIED SOFT COMPUTING, 2022, 130
  • [37] Risk-Aware Travel Path Planning Algorithm Based on Reinforcement Learning during COVID-19
    Wang, Zhijian
    Yang, Jianpeng
    Zhang, Qiang
    Wang, Li
    SUSTAINABILITY, 2022, 14 (20)
  • [38] Traffic and Obstacle-Aware UAV Positioning in Urban Environments Using Reinforcement Learning
    Shafafi, Kamran
    Ricardo, Manuel
    Campos, Rui
    IEEE ACCESS, 2024, 12 : 188652 - 188663
  • [39] Reinforcement Learning based Method for Autonomous Navigation of Mobile Robots in Unknown Environments
    Roan Van Hoa
    Tran Duc Chuyen
    Nguyen Tung Lam
    Tran Ngoc Son
    Nguyen Duc Dien
    Vu Thi To Linh
    2020 INTERNATIONAL CONFERENCE ON ADVANCED MECHATRONIC SYSTEMS (ICAMECHS), 2020, : 266 - 269
  • [40] Towards Model-Based Reinforcement Learning for Industry-Near Environments
    Andersen, Per-Arne
    Goodwin, Morten
    Granmo, Ole-Christoffer
    ARTIFICIAL INTELLIGENCE XXXVI, 2019, 11927 : 36 - 49