Autonomic discovery of subgoals in hierarchical reinforcement learning

Cited: 0
Authors
XIAO Ding
LI Yi-tong
SHI Chuan
Affiliation
[1] School of Computer Science, Beijing University of Posts and Telecommunications
DOI: not available
CLC number: TP391.41
Subject classification code: 080203
Abstract
Option is a promising method to discover hierarchical structure in reinforcement learning (RL) and thereby accelerate learning. The key to option discovery is how an agent can autonomically find useful subgoals along the trails it has passed through. By analyzing the agent's actions in these trails, useful heuristics can be found: not only does the agent pass through subgoals more frequently, but its effective actions at subgoals are also restricted. Consequently, subgoals can be identified as the states in the paths that best match this action-restricted property. In the grid-world environment, the concept of the unique-direction value, which reflects the action-restricted property, is introduced to find the best-matching action-restricted states. The unique-direction-value (UDV) approach is then used to form options autonomically, both offline and online. Experiments show that the approach finds subgoals correctly, and that Q-learning with options discovered by both the offline and online processes accelerates learning significantly.
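This record does not include the paper's exact UDV formula, so the following Python sketch is only a rough illustration of the action-restriction heuristic described in the abstract. The function names (udv_scores, find_subgoals) and the concrete score, visit frequency weighted by the fraction of passes that used the state's single most common action, are assumptions for illustration, not the authors' definition.

# Illustrative sketch (not the authors' exact algorithm): score grid-world
# states by an action-restriction heuristic in the spirit of the paper's
# unique-direction value (UDV). The scoring rule below is an assumption.

from collections import Counter, defaultdict

def udv_scores(trails):
    """Score each state by visit frequency weighted by how strongly its
    outgoing actions concentrate on a single direction.

    trails: list of trajectories, each a list of (state, action) pairs.
    Returns: dict mapping state -> heuristic subgoal score.
    """
    visits = Counter()                    # how often each state is passed
    action_counts = defaultdict(Counter)  # per-state counts of actions taken

    for trail in trails:
        for state, action in trail:
            visits[state] += 1
            action_counts[state][action] += 1

    scores = {}
    for state, counts in action_counts.items():
        total = sum(counts.values())
        # Fraction of passes that used the single most common action:
        # 1.0 means the state was always left in one unique direction.
        restriction = max(counts.values()) / total
        scores[state] = visits[state] * restriction
    return scores

def find_subgoals(trails, top_k=2):
    """Pick the states that best match the action-restricted property."""
    scores = udv_scores(trails)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Toy usage: a doorway-like cell (2, 3) that every trail exits with the
# same action 'right' should score highest.
trails = [
    [((1, 3), 'down'), ((2, 3), 'right'), ((2, 4), 'right')],
    [((3, 3), 'up'),   ((2, 3), 'right'), ((2, 4), 'up')],
]
print(find_subgoals(trails, top_k=1))  # -> [(2, 3)]

In this sketch the doorway cell, which is visited often and always left in the same direction, dominates the ranking, matching the abstract's intuition that subgoals are frequently passed, action-restricted states.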
Pages: 94-104 (11 pages)