Learning an Optimal Sampling Distribution for Efficient Motion Planning

Cited by: 9
Authors
Cheng, Richard [1 ]
Shankar, Krishna [2 ]
Burdick, Joel W. [1 ]
Affiliations
[1] CALTECH, Pasadena, CA 91125 USA
[2] Toyota Res Inst, Toyota, Japan
Source
2020 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS) | 2020
Keywords
DOI
10.1109/IROS45743.2020.9341245
Chinese Library Classification (CLC)
TP [automation technology, computer technology];
Discipline classification code
0812
Abstract
Sampling-based motion planners (SBMPs) are commonly used to generate motion plans by incrementally constructing a search tree through a robot's configuration space. For high degree-of-freedom systems, sampling is often done in a lower-dimensional space, with a steering function responsible for local planning in the higher-dimensional configuration space. However, for highly redundant systems with complex kinematics, this approach is problematic due to the high computational cost of evaluating the steering function, especially in cluttered environments. Therefore, having an efficient, informed sampler becomes critical to online robot operation. In this study, we develop a learning-based approach with policy improvement to compute an optimal sampling distribution for use in SBMPs. Motivated by the challenge of whole-body planning for a 31 degree-of-freedom mobile robot built by the Toyota Research Institute, we combine our learning-based approach with classical graph search to obtain a constrained sampling distribution. Over multiple learning iterations, the algorithm learns a probability distribution weighting areas of low cost and high probability of success, which a graph-search algorithm then uses to obtain an optimal sampling distribution for the robot. On challenging motion planning tasks for the robot, we observe significant computational speed-up, fewer edge evaluations, and more efficient paths with minimal computational overhead. We show the efficacy of our approach with a number of experiments in whole-body motion planning.
Pages: 7485-7492
Number of pages: 8
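To make the pipeline described in the abstract concrete, the following is a minimal, hypothetical Python sketch of the general idea: an RRT-style planner whose sampler is a learned categorical distribution that is re-weighted over repeated planning episodes. This 2D toy is not the paper's 31-DOF implementation; the grid resolution, reward rule, and all names (LearnedSampler, plan_once, reinforce, ...) are assumptions made purely for illustration.

    # Toy sketch (assumed names/parameters): bias an RRT-style planner with a
    # learned sampling distribution that is reinforced across planning episodes.
    import numpy as np

    class LearnedSampler:
        """Categorical sampling distribution over grid cells of a 2D space."""

        def __init__(self, bounds, resolution=20):
            self.bounds = np.asarray(bounds, dtype=float)     # [[xmin, xmax], [ymin, ymax]]
            self.res = resolution
            self.weights = np.ones((resolution, resolution))  # start uniform

        def sample(self, rng):
            """Draw a point: pick a cell by weight, then a uniform point inside it."""
            p = self.weights.ravel() / self.weights.sum()
            i, j = divmod(rng.choice(self.res * self.res, p=p), self.res)
            lo, hi = self.bounds[:, 0], self.bounds[:, 1]
            cell = (hi - lo) / self.res
            return lo + cell * (np.array([i, j]) + rng.random(2))

        def reinforce(self, point, reward, lr=0.5):
            """Policy-improvement-style update: up-weight the cell containing a
            sample that led to a successful, low-cost tree extension."""
            lo, hi = self.bounds[:, 0], self.bounds[:, 1]
            ij = np.clip(((point - lo) / (hi - lo) * self.res).astype(int), 0, self.res - 1)
            self.weights[ij[0], ij[1]] += lr * reward

    def plan_once(sampler, start, goal, collision_free, rng, iters=500, step=0.5):
        """One RRT-style episode biased by the learned sampler. Returns (success,
        samples that extended the tree) so the caller can reinforce the sampler."""
        goal = np.asarray(goal, dtype=float)
        tree = [np.asarray(start, dtype=float)]
        useful = []
        for _ in range(iters):
            q_rand = sampler.sample(rng)
            q_near = min(tree, key=lambda q: np.linalg.norm(q - q_rand))
            d = q_rand - q_near
            q_new = q_near + step * d / (np.linalg.norm(d) + 1e-9)
            if collision_free(q_near, q_new):          # stand-in for the steering check
                tree.append(q_new)
                useful.append(q_new)
                if np.linalg.norm(q_new - goal) < step:
                    return True, useful
        return False, useful

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        sampler = LearnedSampler(bounds=[[0, 10], [0, 10]])

        def obstacle_free(a, b):
            return True  # replace with a real collision/steering checker

        for _ in range(5):  # outer learning iterations
            success, samples = plan_once(sampler, [1, 1], [9, 9], obstacle_free, rng)
            for q in samples:
                sampler.reinforce(q, reward=1.0 if success else 0.1)

The point mirrored from the abstract is that samples which produced successful, low-cost extensions bias future sampling toward those regions, so later episodes spend fewer expensive steering and edge evaluations; the paper's actual method additionally combines the learned distribution with a classical graph search, which this sketch omits.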