Hierarchical fuzzy ART for Q-learning and its application in air combat simulation

Cited by: 5
Authors
Zhou Y. [1 ]
Ma Y. [1 ]
Song X. [1 ]
Gong G. [1 ]
Affiliations
[1] School of Automation Science and Electrical Engineering, Beihang University, XueYuan Road No. 37, HaiDian District, Beijing
Source
World Scientific, Vol. 08, 2017
Keywords
air combat simulation; Fuzzy ART; Q-learning; value function approximation;
DOI
10.1142/S1793962317500520
Abstract
Value function approximation plays an important role in reinforcement learning (RL) with continuous state spaces, which is widely used to build decision models in practice. Many traditional approaches require experienced designers to manually specify the formulation of the approximating function, leading to a rigid, non-adaptive representation of the value function. To address this problem, a novel Q-value function approximation method named Hierarchical fuzzy Adaptive Resonance Theory (HiART) is proposed in this paper. HiART is an adaptive classification network based on Fuzzy ART that learns to segment the state space by automatically classifying the training inputs. HiART begins with a highly generalized structure in which the number of category nodes is limited, which speeds up learning in the early stage. The network is then refined gradually by creating attached sub-networks, forming a layered network structure in the process. Owing to this adaptive structure, HiART reduces the dependence on expert experience for designing the network parameters. The effectiveness and adaptivity of HiART are demonstrated on the Mountain Car benchmark problem, showing both fast learning and low computation time. Finally, a simulation example of the one-versus-one air combat decision problem illustrates the applicability of HiART. © 2017 World Scientific Publishing Company.
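The abstract outlines how a Fuzzy ART network partitions a continuous state space into category nodes so that Q-values can be stored and updated per category. The sketch below is only a minimal illustration of that flat Fuzzy ART plus Q-learning combination under assumed parameter names (rho, alpha, beta, lr); it omits HiART's hierarchical sub-network refinement and is not the authors' implementation.

import numpy as np

class FuzzyARTQ:
    """Minimal Fuzzy ART network that stores one Q-value vector per category
    node, illustrating how an adaptive classifier can discretize a continuous
    state space for tabular-style Q-learning (a sketch, not the paper's HiART)."""

    def __init__(self, n_actions, rho=0.8, alpha=1e-3, beta=0.5, gamma=0.95, lr=0.1):
        self.rho, self.alpha, self.beta = rho, alpha, beta   # ART parameters
        self.gamma, self.lr = gamma, lr                      # RL parameters
        self.n_actions = n_actions
        self.w = []   # category weight vectors (complement-coded prototypes)
        self.q = []   # one Q-value vector per category

    def _code(self, x):
        # Complement coding keeps the L1 norm constant: I = [x, 1 - x].
        # States are assumed to be normalized to [0, 1] beforehand.
        x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
        return np.concatenate([x, 1.0 - x])

    def _category(self, x):
        """Return the index of the resonating category, creating one if needed."""
        I = self._code(x)
        if self.w:
            W = np.array(self.w)
            match = np.minimum(I, W).sum(axis=1)             # |I ^ w_j|
            choice = match / (self.alpha + W.sum(axis=1))    # choice function T_j
            for j in np.argsort(-choice):                    # best category first
                if match[j] / I.sum() >= self.rho:           # vigilance test
                    # Resonance: move the prototype toward I ^ w_j.
                    self.w[j] = (self.beta * np.minimum(I, self.w[j])
                                 + (1 - self.beta) * self.w[j])
                    return int(j)
        self.w.append(I.copy())                              # new category node
        self.q.append(np.zeros(self.n_actions))
        return len(self.w) - 1

    def act(self, state, eps=0.1):
        # Epsilon-greedy action selection on the matching category's Q-vector.
        j = self._category(state)
        if np.random.rand() < eps:
            return np.random.randint(self.n_actions)
        return int(np.argmax(self.q[j]))

    def update(self, s, a, r, s_next, done):
        """Standard Q-learning backup on the Q-vector of the matching category."""
        j, j_next = self._category(s), self._category(s_next)
        target = r + (0.0 if done else self.gamma * self.q[j_next].max())
        self.q[j][a] += self.lr * (target - self.q[j][a])

In this sketch the vigilance parameter rho controls how finely the state space is segmented; the hierarchical refinement described in the abstract (attached sub-networks that sharpen coarse categories over time) would adapt this granularity instead of fixing it in advance.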