Hybrid Q-learning Algorithm About Cooperation in MAS

Cited by: 3
|
Authors
Chen, Wei [1 ]
Guo, Jing [1 ]
Li, Xiong [1 ]
Wang, Jie [1 ]
Affiliation
[1] GuangDong Univ Technol, Automat Fac, Guangzhou 510006, Guangdong, Peoples R China
Source
CCDC 2009: 21ST CHINESE CONTROL AND DECISION CONFERENCE, VOLS 1-6, PROCEEDINGS | 2009
Keywords
CE-NNR Q-Learning; MAS; RoboCup 2D Soccer Simulation;
DOI
10.1109/CCDC.2009.5191990
Chinese Library Classification (CLC)
TP [automation technology, computer technology];
Discipline Classification Code
0812;
Abstract
In most cases, agent learning is an effective approach to challenging problems in multi-agent systems (MAS). Because learning efficiency differs significantly with the actions taken by each individual agent, a suitable learning algorithm plays an important role in solving such problems. Although much related work has addressed different agent-learning algorithms, few of them balance efficiency and accuracy. In this paper, a hybrid Q-learning algorithm named CE-NNR, derived from CE-Q learning and NNR Q-learning, is presented. The algorithm is then applied to the RoboCup soccer simulation system, and the experimental results reported at the end of the paper show that it is reasonable.
Pages: 3943-3947
Number of pages: 5
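
The abstract describes CE-NNR only at a high level, so as a rough illustration of the kind of update it builds on, the sketch below shows plain tabular Q-learning with epsilon-greedy exploration. This is a minimal sketch under stated assumptions: the class name, state/action encoding, and hyperparameters are illustrative placeholders and do not reproduce the paper's CE-NNR combination of CE-Q and NNR Q-learning.

    # Minimal tabular Q-learning sketch (illustrative only; it does not
    # implement the paper's CE-NNR algorithm). States are assumed hashable,
    # actions are a finite list, and hyperparameters are placeholders.
    import random
    from collections import defaultdict

    class QLearner:
        def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
            self.q = defaultdict(float)   # Q-table: (state, action) -> value
            self.actions = actions        # finite action set
            self.alpha = alpha            # learning rate
            self.gamma = gamma            # discount factor
            self.epsilon = epsilon        # exploration probability

        def choose_action(self, state):
            # Epsilon-greedy selection: explore with probability epsilon,
            # otherwise pick the action with the highest estimated Q-value.
            if random.random() < self.epsilon:
                return random.choice(self.actions)
            return max(self.actions, key=lambda a: self.q[(state, a)])

        def update(self, state, action, reward, next_state):
            # Standard one-step Q-learning backup:
            # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            best_next = max(self.q[(next_state, a)] for a in self.actions)
            td_target = reward + self.gamma * best_next
            self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

In a cooperative MAS setting such as RoboCup 2D soccer simulation, each agent would typically hold its own learner of this kind and update it from its local observations and rewards; how the CE-Q and NNR components modify this basic backup is specified in the paper itself, not here.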
Related Papers
50 records in total
  • [1] Cooperation in evolutionary games incorporated with extended Q-learning algorithm
    Long, Pinduo
    Dai, Qionglin
    Li, Haihong
    Yang, Junzhong
    INTERNATIONAL JOURNAL OF MODERN PHYSICS C, 2025, 36 (03):
  • [2] Controlling Sequential Hybrid Evolutionary Algorithm by Q-Learning
    Zhang, Haotian
    Sun, Jianyong
    Back, Thomas
    Zhang, Qingfu
    Xu, Zongben
    IEEE COMPUTATIONAL INTELLIGENCE MAGAZINE, 2023, 18 (01) : 84 - 103
  • [3] A Hybrid Fuzzy Q-Learning algorithm for robot navigation
    Gordon, Sean W.
    Reyes, Napoleon H.
    Barczak, Andre
    2011 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2011, : 2625 - 2631
  • [4] Backward Q-learning: The combination of Sarsa algorithm and Q-learning
    Wang, Yin-Hao
    Li, Tzuu-Hseng S.
    Lin, Chih-Jui
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2013, 26 (09) : 2184 - 2193
  • [5] Study of Cooperation Strategy of Robot Based on Parallel Q-Learning Algorithm
    Wang, Shuda
    Si, Feng
    Yang, Jing
    Wang, Shuoning
    Yang, Jun
    INTELLIGENT ROBOTICS AND APPLICATIONS, PT I, PROCEEDINGS, 2008, 5314 : 633 - 642
  • [6] Hybrid control for robot navigation - A hierarchical Q-learning algorithm
    Chen, Chunlin
    Li, Han-Xiong
    Dong, Daoyi
    IEEE ROBOTICS & AUTOMATION MAGAZINE, 2008, 15 (02) : 37 - 47
  • [7] Deep Q-Learning with Phased Experience Cooperation
    Wang, Hongbo
    Zeng, Fanbing
    Tu, Xuyan
    COMPUTER SUPPORTED COOPERATIVE WORK AND SOCIAL COMPUTING, CHINESECSCW 2019, 2019, 1042 : 752 - 765
  • [8] Q-learning in Multi-Agent Cooperation
    Hwang, Kao-Shing
    Chen, Yu-Jen
    Lin, Tzung-Feng
    2008 IEEE WORKSHOP ON ADVANCED ROBOTICS AND ITS SOCIAL IMPACTS, 2008, : 239 - 244
  • [9] Hybrid Path Planning Algorithm of the Mobile Agent Based on Q-Learning
    Gao, Tengteng
    Li, Caihong
    Liu, Guoming
    Guo, Na
    Wang, Di
    Li, Yongdi
    AUTOMATIC CONTROL AND COMPUTER SCIENCES, 2022, 56 (02) : 130 - 142
  • [10] Cooperation and negotiation in MAS, hybrid intelligent learning algorithm and application in robot soccer
    Zhang, Shu-Jun
    Meng, Qing-Chun
    Song, Chang-Hong
    Zhang, Yan
    Zhang, Wen
    Harbin Gongye Daxue Xuebao/Journal of Harbin Institute of Technology, 2003, 35 (09): : 1083 - 1085