Hybrid Q-learning Algorithm About Cooperation in MAS

Cited by: 3
Authors
Chen, Wei [1 ]
Guo, Jing [1 ]
Li, Xiong [1 ]
Wang, Jie [1 ]
Affiliation
[1] GuangDong Univ Technol, Automat Fac, Guangzhou 510006, Guangdong, Peoples R China
Source
CCDC 2009: 21ST CHINESE CONTROL AND DECISION CONFERENCE, VOLS 1-6, PROCEEDINGS | 2009
Keywords
CE-NNR Q-Learning; MAS; RoboCup 2D Soccer Simulation;
DOI
10.1109/CCDC.2009.5191990
CLC Classification Number
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
In most cases, agent learning is an effective method for solving challenging problems in multi-agent systems (MAS). Since learning efficiency varies significantly with the actions taken by each agent, a suitable algorithm plays an important role in solving such problems. Although much related work addresses different agent-learning algorithms, few of these algorithms balance efficiency and accuracy. In this paper, a hybrid Q-learning algorithm named CE-NNR, derived from CE-Q learning and NNR Q-learning, is presented. The algorithm is then applied to the RoboCup soccer simulation system, and the experimental results presented at the end of the paper show it to be reasonable.
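
This record does not spell out the CE-NNR update rule itself. As orientation only, the sketch below shows the standard tabular Q-learning update on which CE-Q and NNR-style variants are built; the action set, state labels, and hyperparameter values are illustrative assumptions and not the authors' implementation.

import random
from collections import defaultdict

# Assumed hyperparameters for illustration; the paper's settings are not given here.
ALPHA = 0.1    # learning rate
GAMMA = 0.9    # discount factor
EPSILON = 0.1  # exploration probability

# Toy action set for a single soccer agent; purely illustrative.
ACTIONS = ["pass", "dribble", "shoot"]

# Q[(state, action)] -> estimated long-run return, initialised to 0.
Q = defaultdict(float)

def choose_action(state):
    # Epsilon-greedy selection over the toy action set.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state):
    # One-step Q-learning backup:
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Dummy usage with a hand-made transition (no RoboCup simulator attached).
if __name__ == "__main__":
    state = "near_goal"
    action = choose_action(state)
    q_update(state, action, reward=1.0, next_state="scored")
    print(state, action, Q[(state, action)])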
Pages: 3943 - 3947
Number of pages: 5