Trajectory Planning With Deep Reinforcement Learning in High-Level Action Spaces

Cited by: 12
Authors
Williams, Kyle R. [1 ]
Schlossman, Rachel [1 ]
Whitten, Daniel [1 ]
Ingram, Joe
Musuvathy, Srideep [1 ]
Pagan, James [1 ]
Williams, Kyle A. [1 ]
Green, Sam [2 ]
Patel, Anirudh [2 ]
Mazumdar, Anirban [3 ]
Parish, Julie [1 ]
Affiliations
[1] Sandia Natl Labs, Livermore, CA 94551 USA
[2] Semiot Labs, Los Altos, CA 94022 USA
[3] Georgia Inst Technol, Atlanta, GA 30332 USA
Keywords
Trajectory; Planning; Trajectory planning; Training; Reinforcement learning; Optimization; Aerodynamics
DOI
10.1109/TAES.2022.3218496
CLC Classification
V [Aviation, Aerospace]
Subject Classification
08; 0825
Abstract
This article presents a technique for trajectory planning based on parameterized high-level actions. These high-level actions are subtrajectories that have variable shape and duration. The use of high-level actions can improve the performance of guidance algorithms. Specifically, we show how the use of high-level actions improves the performance of guidance policies that are generated via reinforcement learning (RL). RL has shown great promise for solving complex control, guidance, and coordination problems but can still suffer from long training times and poor performance. This work shows how the use of high-level actions reduces the required number of training steps and increases the path performance of an RL-trained guidance policy. We demonstrate the method on a space-shuttle guidance example. We show the proposed method increases the path performance (latitude range) by 18% compared with a baseline RL implementation. Similarly, we show the proposed method achieves steady state during training with approximately 75% fewer training steps. We also show how the guidance policy enables effective performance in an obstacle field. Finally, this article develops a loss function term for policy-gradient-based deep RL, which is analogous to an antiwindup mechanism in feedback control. We demonstrate that the inclusion of this term in the underlying optimization increases the average policy return in our numerical example.
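The two mechanisms the abstract describes — a policy that emits parameterized high-level actions (subtrajectories of variable shape and duration) and an antiwindup-analogous loss term — can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the constant bank-angle parameterization, and all constants here are assumptions chosen only to make the ideas concrete.

```python
import numpy as np

def antiwindup_penalty(pre_squash_mean, bound=1.0, weight=0.1):
    """Penalize pre-squash policy means that exceed the action bound.

    Analogous to antiwindup in feedback control: once a squashing
    nonlinearity (e.g., tanh) saturates, its gradient vanishes and the
    unsquashed mean can drift ever further out of range; this quadratic
    term pulls it back toward the valid region.
    """
    excess = np.maximum(np.abs(pre_squash_mean) - bound, 0.0)
    return weight * np.sum(excess ** 2)

def expand_high_level_action(params, n_steps=10):
    """Expand one parameterized high-level action into low-level commands.

    Here the high-level action is assumed to be a constant command (e.g.,
    a bank angle) held for a variable fraction of a planning horizon; the
    environment then integrates the dynamics over the resulting commands,
    so one policy decision covers many simulation steps.
    """
    command, duration_frac = params
    steps = max(1, int(round(duration_frac * n_steps)))
    return [command] * steps
```

In this sketch, `antiwindup_penalty` would be added to the policy-gradient loss during training, while `expand_high_level_action` sits between the policy output and the simulator, which is what reduces the number of policy decisions (and hence training steps) per trajectory.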
Pages: 2513-2529 (17 pages)
References
(62 in total)
[1] Achiam, Joshua. Spinning Up in Deep Reinforcement Learning. 2018.
[2] Astrom, K. J. Feedback Systems: An Introduction for Scientists and Engineers. 2010.
[3] Bellemare, Marc G.; Candido, Salvatore; Castro, Pablo Samuel; Gong, Jun; Machado, Marlos C.; Moitra, Subhodeep; Ponda, Sameera S.; Wang, Ziyu. Autonomous navigation of stratospheric balloons using reinforcement learning. NATURE, 2020, 588(7836): 77-+.
[4] Berner, C. arXiv, 2019.
[5] Bertsekas, D. Dynamic Programming and Optimal Control, Vol. I. 2012.
[6] Bertsekas, D. P. Nonlinear Programming. 2016.
[7] Betts, J. T. ADV DES CONTROL, 2010, p. 411.
[8] Betts, J. T. Survey of numerical methods for trajectory optimization. JOURNAL OF GUIDANCE CONTROL AND DYNAMICS, 1998, 21(2): 193-207.
[9] Biegler, L. T.; Zavala, V. M. Large-scale nonlinear programming using IPOPT: An integrating framework for enterprise-wide dynamic optimization. COMPUTERS & CHEMICAL ENGINEERING, 2009, 33(3): 575-582.
[10] Bonalli, Riccardo; Herisse, Bruno; Trelat, Emmanuel. Optimal Control of Endoatmospheric Launch Vehicle Systems: Geometric and Computational Issues. IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2020, 65(6): 2418-2433.