Rule Abstraction and Transfer in Reinforcement Learning by Decision Tree
Cited by: 0
Authors:
Wu, Min [1]; Yamashita, Atsushi [2]; Asama, Hajime [1]
Affiliations:
[1] Univ Tokyo, Dept Precis Engn, Hongo 7-3-1, Tokyo, Japan
[2] Univ Tokyo, Fac Precis Engn, Tokyo, Japan
Source: 2012 IEEE/SICE INTERNATIONAL SYMPOSIUM ON SYSTEM INTEGRATION (SII) | 2012
Keywords: (none listed)
DOI: (not available)
CLC classification: TP301 [Theory and Methods]
Subject classification code: 081202
Abstract:
Reinforcement learning agents store knowledge such as state-action values in look-up tables. However, a look-up table requires large memory when the number of states grows, and learning from a look-up table is tabula rasa and therefore very slow. To overcome these disadvantages, generalization methods are used to abstract knowledge. In this paper, decision tree techniques enable the agent to represent abstract knowledge in rule form during the learning process and to build a rule base for each individual task.
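The abstraction idea described in the abstract can be illustrated with a minimal, self-contained sketch: tabular Q-learning on a toy corridor task, followed by compressing the learned look-up table into a single threshold rule over the state feature (a one-level decision stump standing in for a full decision tree). The corridor task, all parameter values, and function names here are illustrative assumptions, not taken from the paper.

```python
import random

# Hypothetical 1-D corridor: states 0..9, actions left (-1) / right (+1),
# reward +1 on reaching state 9. Illustrative only, not the paper's task.
N_STATES, ACTIONS = 10, (-1, +1)

def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning: the look-up-table baseline the paper abstracts from."""
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # epsilon-greedy action selection from the Q-table
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(s, x)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            # standard Q-learning update toward the bootstrapped target
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
            s = s2
    return q

def extract_rule(q):
    """Abstract the table into one rule over the state feature: find the
    threshold t such that 'go right iff state >= t' best matches the
    greedy policy. A one-split stand-in for growing a full decision tree."""
    greedy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)}
    best_t, best_acc = 0, -1
    for t in range(N_STATES + 1):
        acc = sum(greedy[s] == (+1 if s >= t else -1) for s in range(N_STATES))
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

random.seed(0)
q = q_learning()
t = extract_rule(q)
# In this corridor the greedy policy is 'always right', so the rule
# degenerates to the threshold t == 0.
print(f"rule: move right if state >= {t}")
```

The point of the sketch is the memory argument from the abstract: the table holds 20 entries (and grows linearly with the state count), while the extracted rule is a single threshold that generalizes across all states the greedy policy agrees on.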
Pages: 529-534
Page count: 6