Successor Feature Landmarks for Long-Horizon Goal-Conditioned Reinforcement Learning

Cited by: 0
Authors
Hoang, Christopher [1]
Sohn, Sungryull [1,2]
Choi, Jongwook [1]
Carvalho, Wilka [1]
Lee, Honglak [1,2]
Affiliations
[1] Univ Michigan, Ann Arbor, MI 48109 USA
[2] LG AI Res, Ann Arbor, MI USA
Keywords: None listed
DOI: Not available
CLC Classification: TP18 [Artificial Intelligence Theory]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract
Operating in the real world often requires agents to learn about a complex environment and apply this understanding to achieve a breadth of goals. This problem, known as goal-conditioned reinforcement learning (GCRL), becomes especially challenging for long-horizon goals. Current methods have tackled this problem by augmenting goal-conditioned policies with graph-based planning algorithms. However, they struggle to scale to large, high-dimensional state spaces and assume access to exploration mechanisms for efficiently collecting training data. In this work, we introduce Successor Feature Landmarks (SFL), a framework for exploring large, high-dimensional environments so as to obtain a policy that is proficient for any goal. SFL leverages the ability of successor features (SF) to capture transition dynamics, using them to drive exploration by estimating state novelty and to enable high-level planning by abstracting the state space as a non-parametric landmark-based graph. We further exploit SF to directly compute a goal-conditioned policy for inter-landmark traversal, which we use to execute plans to "frontier" landmarks at the edge of the explored state space. We show in our experiments on MiniGrid and ViZDoom that SFL enables efficient exploration of large, high-dimensional state spaces and outperforms state-of-the-art baselines on long-horizon GCRL tasks.
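As a reading aid (not from the paper), the sketch below illustrates the two roles the abstract assigns to successor features: estimating state novelty as the distance in SF space to the nearest stored landmark, and maintaining a non-parametric landmark graph for high-level planning. In the SF literature, a goal-conditioned value can be recovered as Q(s, a, g) = psi(s, a) . w_g once goal weights w_g are known; the sketch covers only the novelty and graph components. All names (LandmarkGraph, add_threshold, edge_threshold), the Euclidean SF metric, and the threshold rules are illustrative assumptions, not the authors' implementation.

import numpy as np
import networkx as nx

class LandmarkGraph:
    """Hypothetical non-parametric landmark graph over successor-feature
    (SF) space. Thresholds and the Euclidean metric are assumptions for
    illustration, not the paper's specification."""

    def __init__(self, add_threshold=1.0, edge_threshold=2.0):
        self.graph = nx.Graph()               # node attr "psi": landmark SF vector
        self.add_threshold = add_threshold    # min novelty needed to spawn a landmark
        self.edge_threshold = edge_threshold  # max SF distance to connect landmarks

    def novelty(self, psi):
        # State novelty = SF-space distance to the closest stored landmark.
        if self.graph.number_of_nodes() == 0:
            return float("inf")
        return min(np.linalg.norm(psi - d["psi"])
                   for _, d in self.graph.nodes(data=True))

    def maybe_add_landmark(self, psi):
        # Add a landmark only when the state is sufficiently novel, then
        # wire edges to nearby landmarks to support graph-based planning.
        if self.novelty(psi) < self.add_threshold:
            return None
        new_id = self.graph.number_of_nodes()
        self.graph.add_node(new_id, psi=np.asarray(psi, dtype=float).copy())
        for other, d in list(self.graph.nodes(data=True)):
            if other != new_id and np.linalg.norm(psi - d["psi"]) < self.edge_threshold:
                self.graph.add_edge(other, new_id)
        return new_id

    def plan(self, start_id, goal_id):
        # Shortest landmark-to-landmark route; in the paper a low-level
        # goal-conditioned policy derived from SF traverses each edge.
        return nx.shortest_path(self.graph, start_id, goal_id)

# Toy usage: random vectors stand in for SFs of states along a trajectory.
rng = np.random.default_rng(0)
g = LandmarkGraph()
for psi in rng.normal(size=(200, 16)):
    g.maybe_add_landmark(psi)

Keeping the graph non-parametric (landmarks added by a novelty test rather than learned) matches the abstract's description and lets "frontier" landmarks, those with few neighbors at the edge of the explored region, serve naturally as exploration targets.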
Pages: 13