Successor Feature Landmarks for Long-Horizon Goal-Conditioned Reinforcement Learning

Cited by: 0
Authors
Hoang, Christopher [1 ]
Sohn, Sungryull [1 ,2 ]
Choi, Jongwook [1 ]
Carvalho, Wilka [1 ]
Lee, Honglak [1 ,2 ]
Affiliations
[1] Univ Michigan, Ann Arbor, MI 48109 USA
[2] LG AI Res, Ann Arbor, MI USA
Keywords
DOI
Not available
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Operating in the real world often requires agents to learn about a complex environment and apply this understanding to achieve a breadth of goals. This problem, known as goal-conditioned reinforcement learning (GCRL), becomes especially challenging for long-horizon goals. Current methods have tackled this problem by augmenting goal-conditioned policies with graph-based planning algorithms. However, they struggle to scale to large, high-dimensional state spaces and assume access to exploration mechanisms for efficiently collecting training data. In this work, we introduce Successor Feature Landmarks (SFL), a framework for exploring large, high-dimensional environments so as to obtain a policy that is proficient for any goal. SFL leverages the ability of successor features (SF) to capture transition dynamics, using them to drive exploration by estimating state novelty and to enable high-level planning by abstracting the state space as a non-parametric landmark-based graph. We further exploit SF to directly compute a goal-conditioned policy for inter-landmark traversal, which we use to execute plans to "frontier" landmarks at the edge of the explored state space. We show in our experiments on MiniGrid and ViZDoom that SFL enables efficient exploration of large, high-dimensional state spaces and outperforms state-of-the-art baselines on long-horizon GCRL tasks.
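The core mechanism the abstract describes can be illustrated with a minimal tabular sketch: successor features ψ(s) accumulate expected discounted future state features, goal-conditioned values follow from a dot product with the goal's feature vector, and an SF-derived statistic can serve as a novelty signal. The toy chain environment, the TD learning loop, and the novelty proxy below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

n_states, gamma, alpha = 5, 0.9, 0.5

# One-hot state features, so SF reduce to the successor representation.
phi = np.eye(n_states)

# psi[s] approximates E[sum_t gamma^t phi(s_t) | s_0 = s] under a fixed policy.
psi = np.zeros((n_states, n_states))

def td_update(s, s_next):
    """One TD(0) update of the successor features for state s."""
    psi[s] += alpha * (phi[s] + gamma * psi[s_next] - psi[s])

# Roll out a simple right-moving chain policy: 0 -> 1 -> ... -> 4 (absorbing).
for _ in range(500):
    for s in range(n_states - 1):
        td_update(s, s + 1)
    td_update(n_states - 1, n_states - 1)  # absorbing final state

# Goal-conditioned values via SF: with reward weights w = phi(goal),
# Q(s) = psi(s) . w, so states dynamically closer to the goal score higher.
goal = n_states - 1
values = psi @ phi[goal]

# Crude novelty proxy (an assumption for illustration): states whose SF mass
# is small have been reached less often under the current experience.
novelty = 1.0 / (1e-8 + np.linalg.norm(psi, axis=1))
```

Under this policy the values increase monotonically toward the goal state (ending at 1/(1-γ) = 10 for the absorbing goal), matching the intuition that SF encode transition dynamics and thus distances to landmarks.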
Pages: 13
Related Papers
50 records in total
  • [31] Modular Reinforcement Learning In Long-Horizon Manipulation Tasks
    Vavrecka, Michal
    Kriz, Jonas
    Sokovnin, Nikita
    Sejnova, Gabriela
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING-ICANN 2024, PT X, 2024, 15025 : 299 - 312
  • [32] Goal-Conditioned Hierarchical Reinforcement Learning With High-Level Model Approximation
    Luo, Yu
    Ji, Tianying
    Sun, Fuchun
    Liu, Huaping
    Zhang, Jianwei
    Jing, Mingxuan
    Huang, Wenbing
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2025, 36 (02) : 2705 - 2719
  • [33] Magnetic Field-Based Reward Shaping for Goal-Conditioned Reinforcement Learning
    Ding, Hongyu
    Tang, Yuanze
    Wu, Qing
    Wang, Bo
    Chen, Chunlin
    Wang, Zhi
    IEEE-CAA JOURNAL OF AUTOMATICA SINICA, 2023, 10 (12) : 2233 - 2247
  • [34] Autotelic Agents with Intrinsically Motivated Goal-Conditioned Reinforcement Learning: A Short Survey
    Colas, Cedric
    Karch, Tristan
    Sigaud, Olivier
    Oudeyer, Pierre-Yves
    JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH, 2022, 74 : 1159 - 1199
  • [35] Offline Goal-Conditioned Reinforcement Learning via f-Advantage Regression
    Ma, Yecheng Jason
    Yan, Jason
    Jayaraman, Dinesh
    Bastani, Osbert
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [38] Goal-Conditioned Reinforcement Learning within a Human-Robot Disassembly Environment
    Elguea-Aguinaco, Inigo
    Serrano-Munoz, Antonio
    Chrysostomou, Dimitrios
    Inziarte-Hidalgo, Ibai
    Bogh, Simon
    Arana-Arexolaleiba, Nestor
    APPLIED SCIENCES-BASEL, 2022, 12 (22):
  • [39] A Controllable Agent by Subgoals in Path Planning Using Goal-Conditioned Reinforcement Learning
    Lee, Gyeong Taek
    Kim, Kangjin
    IEEE ACCESS, 2023, 11 : 33812 - 33825
  • [40] Sample-Efficient Goal-Conditioned Reinforcement Learning via Predictive Information Bottleneck for Goal Representation Learning
    Zou, Qiming
    Suzuki, Einoshin
    2023 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2023), 2023, : 9523 - 9529