SPACECRAFT DECISION-MAKING AUTONOMY USING DEEP REINFORCEMENT LEARNING

Cited by: 0
Authors
Harris, Andrew [1]
Teil, Thibaud [1]
Schaub, Hanspeter [2]
Affiliations
[1] Univ Colorado, Ann & HJ Smead Dept Aerosp Engn Sci, Boulder, CO 80309 USA
[2] Univ Colorado, Smead Dept Aerosp Engn Sci, Colorado Ctr Astrodynam Res, 431 UCB, Boulder, CO 80309 USA
Keywords
DOI
Not available
Chinese Library Classification
V [Aeronautics, Astronautics];
Subject Classification
08; 0825
Abstract
The high cost of space mission operations has motivated several space agencies to prioritize the development of autonomous spacecraft control techniques. "Learning" agents present one manner in which autonomous spacecraft can adapt to changing hardware capabilities, environmental parameters, or mission objectives while minimizing dependence on ground intervention. This work applies the frameworks and tools of deep reinforcement learning to high-level mission planning and decision-making problems for autonomous spacecraft, under the assumption that lower-level sub-problems have been addressed through design. Two representative problems, reflecting the challenges of autonomous orbit insertion and science operations planning respectively, are posed as Partially Observable Markov Decision Processes (POMDPs) and addressed with deep reinforcement learners to demonstrate the benefits, pitfalls, and considerations inherent to this approach. Sensitivity to initial conditions and to the learning strategy is discussed and analyzed. Results from the selected problems demonstrate the use of reinforcement learning to improve or fine-tune prior policies within a mode-oriented paradigm while maintaining robustness to uncertain environmental parameters.
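To ground the abstract's formulation, the sketch below casts a toy version of the science-operations planning problem as a POMDP and trains a small deep Q-network on it. This is a minimal illustration under assumed names and numbers, not the authors' environment: ModeSelectEnv, the three operating modes (charge, science, downlink), the noisy two-element observation, the reward shaping, and all hyperparameters are hypothetical.

    # Hypothetical sketch: a spacecraft mode-selection POMDP trained with a
    # small deep Q-network. The environment, modes, reward shaping, and
    # hyperparameters are illustrative assumptions, not the paper's own.
    import random
    from collections import deque

    import numpy as np
    import torch
    import torch.nn as nn

    class ModeSelectEnv:
        """Toy POMDP: pick an operating mode (0=charge, 1=science,
        2=downlink) from a noisy partial observation of battery level
        and ground-station visibility."""
        def __init__(self, seed=0):
            self.rng = np.random.default_rng(seed)

        def reset(self):
            self.battery, self.t = 1.0, 0
            return self._observe()

        def _observe(self):
            visible = float(self.t % 10 < 3)           # station in view?
            noise = self.rng.normal(0.0, 0.05, 2)      # sensor noise
            return np.array([self.battery, visible]) + noise

        def step(self, action):
            drain = (-0.10, 0.08, 0.12)[action]        # charge vs. draw
            self.battery = float(np.clip(self.battery - drain, 0.0, 1.0))
            visible = self.t % 10 < 3
            reward = ((1.0 if action == 1 else 0.0)
                      + (2.0 if action == 2 and visible else 0.0)
                      - (5.0 if self.battery <= 0.0 else 0.0))  # brownout
            self.t += 1
            return self._observe(), reward, self.t >= 200

    q_net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 3))
    opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
    buffer, gamma, eps = deque(maxlen=10_000), 0.99, 0.1

    env = ModeSelectEnv()
    for episode in range(50):
        obs, done = env.reset(), False
        while not done:
            if random.random() < eps:                  # epsilon-greedy
                act = random.randrange(3)
            else:
                with torch.no_grad():
                    q = q_net(torch.as_tensor(obs, dtype=torch.float32))
                act = int(q.argmax())
            nxt, rew, done = env.step(act)
            buffer.append((obs, act, rew, nxt, float(done)))
            obs = nxt
            if len(buffer) >= 64:                      # one TD(0) update
                o, a, r, n, d = map(np.array, zip(*random.sample(buffer, 64)))
                o, n, r, d = (torch.as_tensor(x, dtype=torch.float32)
                              for x in (o, n, r, d))
                q_sa = q_net(o).gather(1, torch.as_tensor(a).view(-1, 1)).squeeze(1)
                with torch.no_grad():
                    target = r + gamma * q_net(n).max(1).values * (1.0 - d)
                loss = nn.functional.mse_loss(q_sa, target)
                opt.zero_grad()
                loss.backward()
                opt.step()

The mode-level action space mirrors the paper's mode-oriented paradigm, in which the learner selects among pre-designed flight modes rather than commanding actuators directly; for brevity the sketch reuses the online network as its own bootstrap target instead of maintaining a separate target network.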
Pages: 1757-1775
Page count: 19
Related Papers
50 entries total
  • [31] Decision-making for Connected and Automated Vehicles in Challenging Traffic Conditions Using Imitation and Deep Reinforcement Learning
    Hu, Jinchao
    Li, Xu
    Hu, Weiming
    Xu, Qimin
    Hu, Yue
    INTERNATIONAL JOURNAL OF AUTOMOTIVE TECHNOLOGY, 2023, 24 (06) : 1589 - 1602
  • [33] Driving Tasks Transfer Using Deep Reinforcement Learning for Decision-Making of Autonomous Vehicles in Unsignalized Intersection
    Shu, Hong
    Liu, Teng
    Mu, Xingyu
    Cao, Dongpu
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2022, 71 (01) : 41 - 52
  • [34] HMM for discovering decision-making dynamics using reinforcement learning experiments
    Guo, Xingche
    Zeng, Donglin
    Wang, Yuanjia
    BIOSTATISTICS, 2024, 26 (01)
  • [36] Uncertainty-based Decision Making Using Deep Reinforcement Learning
    Zhao, Xujiang
    Hu, Shu
    Cho, Jin-Hee
    Chen, Feng
    2019 22ND INTERNATIONAL CONFERENCE ON INFORMATION FUSION (FUSION 2019), 2019
  • [37] Decision Making in Monopoly Using a Hybrid Deep Reinforcement Learning Approach
    Bonjour, Trevor
    Haliem, Marina
    Alsalem, Aala
    Thomas, Shilpa
    Li, Hongyu
    Aggarwal, Vaneet
    Kejriwal, Mayank
    Bhargava, Bharat
    IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE, 2022, 6 (06) : 1335 - 1344
  • [38] Reinforcement learning-based decision-making for spacecraft pursuit-evasion game in elliptical orbits
    Yu, Weizhuo
    Liu, Chuang
    Yue, Xiaokui
    CONTROL ENGINEERING PRACTICE, 2024, 153
  • [39] A Deep Reinforcement Learning Decision-Making Approach for Adaptive Cruise Control in Autonomous Vehicles
    Ghraizi, Dany
    Talj, Reine
    Francis, Clovis
    2023 21ST INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS, ICAR, 2023 : 71 - 78
  • [40] Decision-making Method for Transient Stability Emergency Control Based on Deep Reinforcement Learning
    Li, H.
    Zhang, P.
    Liu, Z.
    Dianli Xitong Zidonghua/Automation of Electric Power Systems, 2023, 47 (05) : 144 - 152