Constrained Reinforcement Learning for Vehicle Motion Planning with Topological Reachability Analysis

Cited by: 15
Authors
Gu, Shangding [1 ]
Chen, Guang [1 ,2 ]
Zhang, Lijun [2 ]
Hou, Jing [2 ]
Hu, Yingbai [1 ]
Knoll, Alois [1 ]
Affiliations
[1] Tech Univ Munich, Dept Informat, D-80333 Munich, Germany
[2] Tongji Univ, Sch Automot Studies, Shanghai 201804, Peoples R China
Funding
National Natural Science Foundation of China; European Union's Horizon 2020;
Keywords
motion planning; automated driving; reinforcement learning; reachability analysis; DECISION-MAKING; TREE;
DOI
10.3390/robotics11040081
CLC Number
TP24 [Robotics];
Discipline Code
080202 ; 1405 ;
Abstract
Rule-based traditional motion planning methods usually perform well given prior knowledge of the macro-scale environment but struggle in unknown and uncertain environments. Deep reinforcement learning (DRL) can effectively handle micro-scale unknown and uncertain environments; nevertheless, DRL is unstable and lacks interpretability. This raises a new challenge: how to combine the strengths of the two approaches, overcome their drawbacks, and guarantee stability in uncertain environments. In this study, a multi-constraint, multi-scale motion planning method named RLTT is proposed for automated driving using constrained reinforcement learning (RL); it comprises RL, a topological reachability analysis of the vehicle path space (TPS), and a trajectory lane model (TLM). First, a dynamic model of the vehicle is formulated; TLM is then developed on the basis of this model, constraining the RL action and state spaces. Second, macro-scale path planning is achieved through TPS, while discrete routing points at the micro scale are obtained via RLTT. Third, the proposed motion planning method is designed by combining sophisticated rules, and a theoretical analysis is provided to guarantee its efficiency. Finally, experiments are conducted to evaluate the effectiveness of the proposed method, which reduces the distance cost by 19.9% compared with the traditional method. The experimental results indicate that the proposed method can help bridge the gap between data-driven and traditional methods, provide better performance for automated driving, and facilitate the use of RL methods in more fields.
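The abstract describes constraining the RL action space through a vehicle dynamic model. The paper's actual TLM is not reproduced here; the following is only a minimal illustrative sketch of the general idea, assuming a standard kinematic bicycle model and hypothetical bounds (`a_max`, `delta_max`, `ay_max`) chosen for the example: a raw policy output is projected into a feasible action set before being applied to the vehicle state.

```python
import numpy as np

def bicycle_step(state, action, dt=0.1, wheelbase=2.7):
    """Advance a kinematic bicycle model one time step.
    state = (x, y, heading, speed); action = (accel, steering angle)."""
    x, y, theta, v = state
    a, delta = action
    x += v * np.cos(theta) * dt
    y += v * np.sin(theta) * dt
    theta += v / wheelbase * np.tan(delta) * dt
    v = max(0.0, v + a * dt)          # no reverse driving in this sketch
    return (x, y, theta, v)

def constrain_action(raw_action, speed, a_max=3.0, delta_max=0.6,
                     ay_max=4.0, wheelbase=2.7):
    """Project a raw RL policy output into a feasible action set:
    bounded acceleration, bounded steering, and a lateral-acceleration
    limit v^2 * tan(delta) / L <= ay_max (tightens steering at speed)."""
    a = float(np.clip(raw_action[0], -a_max, a_max))
    delta = float(np.clip(raw_action[1], -delta_max, delta_max))
    if speed > 1e-6:
        delta_lat = np.arctan(ay_max * wheelbase / speed**2)
        delta = float(np.clip(delta, -delta_lat, delta_lat))
    return (a, delta)
```

At 20 m/s, for example, a raw command of (5.0, 1.0) is clipped to roughly (3.0, 0.027): the lateral-acceleration limit dominates the static steering bound at high speed, which is the kind of dynamics-based restriction of the action space the abstract attributes to TLM.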
Pages: 23