Trajectory Planning With Deep Reinforcement Learning in High-Level Action Spaces

Cited by: 11
Authors
Williams, Kyle R. [1 ]
Schlossman, Rachel [1 ]
Whitten, Daniel [1 ]
Ingram, Joe
Musuvathy, Srideep [1 ]
Pagan, James [1 ]
Williams, Kyle A. [1 ]
Green, Sam [2 ]
Patel, Anirudh [2 ]
Mazumdar, Anirban [3 ]
Parish, Julie [1 ]
Affiliations
[1] Sandia Natl Labs, Albuquerque, NM 87185 USA
[2] Semiot Labs, Los Altos, CA 94022 USA
[3] Georgia Inst Technol, Atlanta, GA 30332 USA
Keywords
Trajectory; Planning; Trajectory planning; Training; Reinforcement learning; Optimization; Aerodynamics; OPTIMIZATION;
DOI
10.1109/TAES.2022.3218496
CLC Number
V [Aeronautics, Astronautics]
Discipline Code
08; 0825
Abstract
This article presents a technique for trajectory planning based on parameterized high-level actions. These high-level actions are subtrajectories that have variable shape and duration. The use of high-level actions can improve the performance of guidance algorithms. Specifically, we show how the use of high-level actions improves the performance of guidance policies that are generated via reinforcement learning (RL). RL has shown great promise for solving complex control, guidance, and coordination problems but can still suffer from long training times and poor performance. This work shows how the use of high-level actions reduces the required number of training steps and increases the path performance of an RL-trained guidance policy. We demonstrate the method on a space-shuttle guidance example. We show the proposed method increases the path performance (latitude range) by 18% compared with a baseline RL implementation. Similarly, we show the proposed method achieves steady state during training with approximately 75% fewer training steps. We also show how the guidance policy enables effective performance in an obstacle field. Finally, this article develops a loss function term for policy-gradient-based deep RL, which is analogous to an antiwindup mechanism in feedback control. We demonstrate that the inclusion of this term in the underlying optimization increases the average policy return in our numerical example.
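The abstract names two concrete mechanisms, so a brief illustration may help. Below is a minimal Python sketch, not the authors' code: it assumes a high-level action is a (duration, shape-coefficients) tuple expanded into a quadratic bank-angle subtrajectory, and it implements the antiwindup idea as a penalty on the portion of the raw policy output that exceeds the actuator limits before clipping. The function names, the quadratic profile, and the 0.1 penalty weight are all illustrative assumptions.

import numpy as np
import torch

def expand_high_level_action(duration_s, coeffs, dt=0.1):
    # Hypothetical expansion of a parameterized high-level action into a
    # subtrajectory of low-level commands: a quadratic bank-angle profile
    # phi(t) = c0 + c1*t + c2*t^2, sampled every dt seconds. The paper's
    # actual parameterization may differ.
    t = np.arange(0.0, duration_s, dt)
    c0, c1, c2 = coeffs
    return c0 + c1 * t + c2 * t ** 2

def antiwindup_penalty(raw_actions, low, high):
    # Penalize the part of the raw policy output that would be clipped away.
    # Analogous to antiwindup in feedback control: once the action saturates,
    # the clipped value (and hence the reward) stops responding, so this term
    # keeps the optimizer from pushing outputs ever deeper into saturation.
    excess = torch.relu(raw_actions - high) + torch.relu(low - raw_actions)
    return (excess ** 2).mean()

# Combine the penalty with a generic policy-gradient (REINFORCE-style) loss
# on stand-in data; in practice raw_actions would come from the policy network.
raw_actions = torch.randn(64, 3, requires_grad=True)
log_probs = -0.5 * (raw_actions ** 2).sum(dim=1)   # stand-in log-probabilities
returns = torch.randn(64)                          # stand-in episode returns
pg_loss = -(log_probs * returns).mean()
loss = pg_loss + 0.1 * antiwindup_penalty(raw_actions, low=-1.0, high=1.0)
loss.backward()

print(expand_high_level_action(2.0, (0.0, 0.5, -0.1))[:5])

Penalizing the pre-clip excess rather than the clipped action preserves a gradient signal under saturation, which mirrors the role antiwindup plays for an integrator that keeps accumulating error it cannot act on.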
Pages: 2513-2529
Page count: 17
Related Papers
Total: 50 records
[21]   Deep Reinforcement Learning Based Computation Offloading and Trajectory Planning for Multi-UAV Cooperative Target Search [J].
Luo, Quyuan ;
Luan, Tom H. ;
Shi, Weisong ;
Fan, Pingzhi .
IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2023, 41 (02) :504-520
[22]   Hot rolling planning based on deep reinforcement learning [J].
Wang, Jingliang ;
Sun, Yanguang ;
Gu, Jiachen ;
Chen, Jinxiang .
2024 5TH INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND COMPUTER ENGINEERING, ICAICE, 2024, :895-902
[23]   Layerwise Quantum Deep Reinforcement Learning for Joint Optimization of UAV Trajectory and Resource Allocation [J].
Silvirianti ;
Narottama, Bhaskara ;
Shin, Soo Young .
IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (01) :430-443
[24]   Goal-Conditioned Hierarchical Reinforcement Learning With High-Level Model Approximation [J].
Luo, Yu ;
Ji, Tianying ;
Sun, Fuchun ;
Liu, Huaping ;
Zhang, Jianwei ;
Jing, Mingxuan ;
Huang, Wenbing .
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2025, 36 (02) :2705-2719
[25]   Managing engineering systems with large state and action spaces through deep reinforcement learning [J].
Andriotis, C. P. ;
Papakonstantinou, K. G. .
RELIABILITY ENGINEERING & SYSTEM SAFETY, 2019, 191
[26]   Deep Reinforcement Learning for Jointly Resource Allocation and Trajectory Planning in UAV-Assisted Networks [J].
Jwaifel, Arwa Mahmoud ;
Van Do, Tien .
COMPUTATIONAL COLLECTIVE INTELLIGENCE, ICCCI 2023, 2023, 14162 :71-83
[27]   UAV Trajectory Planning in Wireless Sensor Networks for Energy Consumption Minimization by Deep Reinforcement Learning [J].
Zhu, Botao ;
Bedeer, Ebrahim ;
Nguyen, Ha H. ;
Barton, Robert ;
Henry, Jerome .
IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2021, 70 (09) :9540-9554
[28]   Energy-Optimal Trajectory Planning for Near-Space Solar-Powered UAV Based on Hierarchical Reinforcement Learning [J].
Xu, Tichao ;
Wu, Di ;
Meng, Wenyue ;
Ni, Wenjun ;
Zhang, Zijian .
IEEE ACCESS, 2024, 12 :21420-21436
[30]   Fast and slow curiosity for high-level exploration in reinforcement learning [J].
Bougie, Nicolas ;
Ichise, Ryutaro .
APPLIED INTELLIGENCE, 2021, 51 (02) :1086-1107