Partitioning in multi-agent reinforcement learning

Cited by: 0
Authors
Sun, R [1 ]
Peterson, T [1 ]
Affiliation
[1] Univ Missouri, CECS, Columbia, MO 65211 USA
Source
FROM ANIMALS TO ANIMATS 6 | 2000
Keywords
DOI
Not available
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper addresses automatic partitioning of state spaces in complex reinforcement learning tasks using multiple agents, without a priori domain knowledge of task structures. Partitioning a state/input space into multiple regions helps to exploit the differential characteristics of regions and of agents, thus facilitating learning and reducing the complexity of agents, especially when function approximators are used. We develop a method for optimizing the partitioning of the space through experience, without the use of a priori domain knowledge. The method is experimentally tested and compared to a number of other algorithms. As expected, we found that the multi-agent method with automatic partitioning outperformed single-agent learning.
Pages: 325-332
Page count: 8
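Illustrative sketch
The abstract describes assigning separate learning agents to separate regions of the state/input space. Below is a minimal Python sketch of that idea, using one tabular Q-learner per region on a toy corridor task. The fixed block partition, the environment, and all constants are assumptions made for illustration only; the paper's central contribution, optimizing the partition itself from experience, is not reproduced here.

```python
import random
from collections import defaultdict

N_STATES, N_ACTIONS, N_REGIONS = 12, 2, 3
ALPHA, GAMMA, EPSILON, MAX_STEPS = 0.1, 0.95, 0.1, 100

# Hypothetical fixed partition: contiguous blocks of states, one Q-learning agent per region.
region_of = [s * N_REGIONS // N_STATES for s in range(N_STATES)]
q_tables = [defaultdict(float) for _ in range(N_REGIONS)]  # one Q-table per agent

def act(state):
    """Epsilon-greedy action chosen by the agent that owns the state's region."""
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    q = q_tables[region_of[state]]
    values = [q[(state, a)] for a in range(N_ACTIONS)]
    best = max(values)
    return random.choice([a for a, v in enumerate(values) if v == best])

def step(state, action):
    """Toy corridor: action 1 moves right, action 0 moves left; reward 1 at the right end."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0), nxt == N_STATES - 1

for _ in range(500):
    state = 0
    for _ in range(MAX_STEPS):
        action = act(state)
        nxt, reward, done = step(state, action)
        # The owning agent updates its own table, bootstrapping from whichever
        # agent owns the successor state's region.
        q, q_next = q_tables[region_of[state]], q_tables[region_of[nxt]]
        best_next = max(q_next[(nxt, a)] for a in range(N_ACTIONS))
        td_target = reward if done else reward + GAMMA * best_next
        q[(state, action)] += ALPHA * (td_target - q[(state, action)])
        state = nxt
        if done:
            break

greedy = [max(range(N_ACTIONS), key=lambda a: q_tables[region_of[s]][(s, a)])
          for s in range(N_STATES)]
print("Greedy action per state (1 = move right):", greedy)
```

Keeping a separate table per region keeps each agent's representation small, which is the complexity argument made in the abstract; in the paper the region boundaries are additionally adapted from experience rather than fixed in advance as in this sketch.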