Integrating guidance into relational reinforcement learning

Cited by: 55
Authors
Driessens, K
Dzeroski, S
Affiliations
[1] Katholieke Univ Leuven, Dept Comp Sci, B-3001 Heverlee, Belgium
[2] Jozef Stefan Inst, Dept Intelligent Syst, SI-1000 Ljubljana, Slovenia
Keywords
reinforcement learning; relational learning; guided exploration
DOI
10.1023/B:MACH.0000039779.47329.3a
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Reinforcement learning, and Q-learning in particular, encounter two major problems when dealing with large state spaces. First, learning the Q-function in tabular form may be infeasible because of the excessive amount of memory needed to store the table, and because the Q-function only converges after each state has been visited multiple times. Second, rewards in the state space may be so sparse that with random exploration they will only be discovered extremely slowly. The first problem is often solved by learning a generalization of the encountered examples (e.g., using a neural net or decision tree). Relational reinforcement learning (RRL) is such an approach; it makes Q-learning feasible in structural domains by incorporating a relational learner into Q-learning. The problem of sparse rewards has not been addressed for RRL. This paper presents a solution based on the use of "reasonable policies" to provide guidance. Different types of policies and different strategies to supply guidance through these policies are discussed and evaluated experimentally in several relational domains to show the merits of the approach.
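As a rough illustration of the guidance idea, the sketch below mixes a hand-coded "reasonable policy" into the exploration of a plain tabular Q-learner on a toy sparse-reward chain. Everything here (ChainEnv, guidance_policy, guided_fraction, and the other hyperparameters) is an assumption made for the sketch, not the paper's setup: the paper works in relational domains, replaces the table with a relational regression learner, and evaluates several guidance strategies rather than this single early-episodes scheme.

```python
import random
from collections import defaultdict

class ChainEnv:
    """Toy sparse-reward chain: reward 1.0 only at the rightmost state."""
    def __init__(self, length=20):
        self.length = length
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        # action 0 moves left, action 1 moves right, clipped to the chain
        delta = 1 if action == 1 else -1
        self.state = max(0, min(self.length - 1, self.state + delta))
        done = self.state == self.length - 1
        return self.state, (1.0 if done else 0.0), done

def guidance_policy(state):
    """A 'reasonable' (imperfect is fine) hand-coded policy: go right."""
    return 1

def guided_q_learning(episodes=300, guided_fraction=0.3,
                      alpha=0.1, gamma=0.9, eps=0.1, max_steps=200):
    env = ChainEnv()
    Q = defaultdict(float)  # tabular stand-in for the paper's relational learner
    for ep in range(episodes):
        # Supply guidance in the early episodes only; later episodes
        # fall back to ordinary epsilon-greedy exploration.
        guided = ep < guided_fraction * episodes
        s = env.reset()
        for _ in range(max_steps):
            if guided:
                a = guidance_policy(s)
            elif random.random() < eps:
                a = random.randrange(2)
            else:
                a = max((0, 1), key=lambda act: Q[(s, act)])
            s2, r, done = env.step(a)
            # Standard Q-learning update; guided steps are treated as
            # ordinary experience, so guidance only shapes which
            # (state, action) pairs the learner gets to see.
            best_next = max(Q[(s2, 0)], Q[(s2, 1)])
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
            if done:
                break
    return Q

if __name__ == "__main__":
    Q = guided_q_learning()
    greedy = [max((0, 1), key=lambda a: Q[(s, a)]) for s in range(20)]
    print("Greedy action per state:", greedy)
```

The point the sketch makes is the one the abstract argues: guided episodes reach the sparse reward quickly and seed the Q-function with informative examples, after which ordinary exploration can refine it.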
Pages: 271-304
Page count: 34