Swarm Reinforcement Learning Method Based on Hierarchical Q-Learning

Cited by: 0
Authors
Kuroe, Yasuaki [1 ]
Takeuchi, Kenya [1 ]
Maeda, Yutaka [1 ]
Affiliations
[1] Kansai Univ, Fac Engn Sci, Suita, Osaka, Japan
Source
2021 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (IEEE SSCI 2021) | 2021
Funding
Japan Society for the Promotion of Science
Keywords
reinforcement learning method; partially observed Markov decision process; hierarchical Q-learning; swarm intelligence;
DOI
10.1109/SSCI50451.2021.9659877
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
In recent decades, reinforcement learning has attracted a great deal of attention and has been studied extensively. However, it is essentially a trial-and-error scheme, and acquiring optimal strategies can require substantial computational time; for large, complicated problems with many states, optimal strategies may not be obtained at all. To address these issues we have proposed the swarm reinforcement learning method, inspired by multi-point search optimization methods. Swarm reinforcement learning has been studied extensively and its effectiveness has been confirmed for several problems, especially Markov decision processes in which the agents can fully observe the state of the environment. In many real-world problems, however, the agents cannot fully observe the environment; such problems are usually partially observable Markov decision processes (POMDPs). The purpose of this paper is to develop a swarm reinforcement learning method that can deal with POMDPs. We propose a swarm reinforcement learning method based on HQ-learning, a hierarchical extension of Q-learning. Experiments show that the proposed method can handle POMDPs and achieves higher performance than the original HQ-learning.
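The core swarm idea the abstract describes — several Q-learning agents learning in parallel and periodically exchanging information so that all agents benefit from the best strategy found so far — can be sketched as follows. This is a minimal illustration on a hypothetical five-state chain task, not the paper's HQ-learning algorithm or its experimental setup; the environment, the hyperparameters, and the copy-the-best-agent exchange rule are all assumptions made for the sketch.

```python
import random

# Hypothetical 5-state chain task: start at state 0, reward 1 on reaching state 4.
N_STATES, GOAL, N_ACTIONS = 5, 4, 2
MOVES = (-1, +1)  # action 0 = left, action 1 = right

def step(s, a):
    s2 = min(max(s + MOVES[a], 0), N_STATES - 1)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

def greedy(Q, s):
    # Greedy action with random tie-breaking (helps early exploration).
    m = max(Q[s])
    return random.choice([a for a in range(N_ACTIONS) if Q[s][a] == m])

def run_episode(Q, eps=0.1, alpha=0.5, gamma=0.9, max_steps=50):
    # One epsilon-greedy Q-learning episode, updating Q in place.
    s = 0
    for _ in range(max_steps):
        a = random.randrange(N_ACTIONS) if random.random() < eps else greedy(Q, s)
        s2, r, done = step(s, a)
        target = r + (0.0 if done else gamma * max(Q[s2]))
        Q[s][a] += alpha * (target - Q[s][a])
        s = s2
        if done:
            break

def greedy_return(Q, max_steps=50):
    # Undiscounted return of the deterministic greedy policy from the start state.
    s, ret = 0, 0.0
    for _ in range(max_steps):
        a = max(range(N_ACTIONS), key=lambda a: Q[s][a])
        s, r, done = step(s, a)
        ret += r
        if done:
            break
    return ret

def swarm_q_learning(n_agents=4, generations=20, episodes=5, seed=0):
    # A swarm of independent Q-learning agents; after each generation every
    # agent restarts from the best agent's Q-table (a simplified exchange rule).
    random.seed(seed)
    swarm = [[[0.0] * N_ACTIONS for _ in range(N_STATES)] for _ in range(n_agents)]
    for _ in range(generations):
        for Q in swarm:
            for _ in range(episodes):
                run_episode(Q)
        best = max(swarm, key=greedy_return)
        swarm = [[row[:] for row in best] for _ in range(n_agents)]
    return swarm[0]
```

The fully observable chain task lets plain Q-learning agents serve as swarm members; the paper's contribution is to replace each member with an HQ-learning agent, whose hierarchy of subgoal-selecting subtasks lets it cope with partial observability.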
Pages: 8