Constrained Reinforcement Learning for Vehicle Motion Planning with Topological Reachability Analysis

Cited by: 15
Authors
Gu, Shangding [1 ]
Chen, Guang [1 ,2 ]
Zhang, Lijun [2 ]
Hou, Jing [2 ]
Hu, Yingbai [1 ]
Knoll, Alois [1 ]
Affiliations
[1] Tech Univ Munich, Dept Informat, D-80333 Munich, Germany
[2] Tongji Univ, Sch Automot Studies, Shanghai 201804, Peoples R China
Funding
EU Horizon 2020; National Natural Science Foundation of China;
Keywords
motion planning; automated driving; reinforcement learning; reachability analysis; decision-making; tree;
DOI
10.3390/robotics11040081
CLC Classification
TP24 [Robotics];
Subject Classification Codes
080202 ; 1405 ;
Abstract
Rule-based traditional motion planning methods usually perform well given prior knowledge of the macro-scale environment but struggle in unknown and uncertain environments. Deep reinforcement learning (DRL) can effectively handle micro-scale unknown and uncertain environments; nevertheless, DRL is unstable and lacks interpretability. This raises a new challenge: how to combine the strengths of the two approaches while overcoming their drawbacks and guaranteeing stability in uncertain environments. In this study, a multi-constraint and multi-scale motion planning method for automated driving, named RLTT, is proposed on the basis of constrained reinforcement learning (RL); it comprises RL, a topological reachability analysis of the vehicle path space (TPS), and a trajectory lane model (TLM). First, a dynamic model of the vehicle is formulated; TLM is then developed on the basis of this model, thereby constraining the RL action and state spaces. Second, macro-scale path planning is achieved through TPS, and in the micro-scale range, discrete routing points are obtained via RLTT. Third, the proposed motion planning method is designed by combining sophisticated rules, and a theoretical analysis is provided to guarantee its efficiency. Finally, experiments are conducted to evaluate the effectiveness of the proposed method: it reduces the distance cost by 19.9% compared to the traditional method. The experimental results indicate that the proposed method can help bridge the gap between data-driven and traditional methods, provide better performance for automated driving, and facilitate the use of RL methods in more fields.
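The abstract describes constraining the RL action and state spaces through a vehicle model before actions are applied. As an illustrative sketch only (the paper's actual TLM and bounds are not given here), the idea of projecting a raw RL action into a feasible set and propagating a kinematic bicycle model can be written as follows; `MAX_STEER`, `MAX_ACCEL`, and `WHEELBASE` are hypothetical placeholder values:

```python
import math

# Hypothetical limits standing in for the trajectory-lane-model (TLM)
# constraints described in the abstract; the paper's actual bounds differ.
MAX_STEER = 0.5   # steering angle limit, rad
MAX_ACCEL = 2.0   # acceleration limit, m/s^2
WHEELBASE = 2.7   # vehicle wheelbase, m

def constrain_action(steer, accel):
    """Project a raw RL action onto the feasible action space by clipping."""
    steer = max(-MAX_STEER, min(MAX_STEER, steer))
    accel = max(-MAX_ACCEL, min(MAX_ACCEL, accel))
    return steer, accel

def bicycle_step(state, action, dt=0.1):
    """Advance a kinematic bicycle model one step under a constrained action.

    state = (x, y, yaw, v): position (m), heading (rad), speed (m/s).
    """
    x, y, yaw, v = state
    steer, accel = constrain_action(*action)
    x += v * math.cos(yaw) * dt
    y += v * math.sin(yaw) * dt
    yaw += v / WHEELBASE * math.tan(steer) * dt
    v = max(0.0, v + accel * dt)   # no reversing in this sketch
    return (x, y, yaw, v)

# A raw policy output that exceeds both limits is clipped before use.
state = (0.0, 0.0, 0.0, 5.0)
state = bicycle_step(state, (1.2, 5.0))
```

Clipping is only the simplest way to enforce such constraints; the paper's constrained-RL formulation additionally restricts the state space via TLM, which a full implementation would model explicitly.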
Pages: 23