Neural Q-learning in Motion Planning for Mobile Robot

Cited by: 1
Authors
Qin, Zheng [1]
Gu, Jason [1]
Affiliation
[1] Dalhousie Univ, Dept Elect & Comp Engn, Halifax, NS B3J 2X4, Canada
Source
2009 IEEE INTERNATIONAL CONFERENCE ON AUTOMATION AND LOGISTICS (ICAL 2009), VOLS 1-3 | 2009
Keywords
Reinforcement learning; neural network; mobile robot; motion planning
DOI
10.1109/ICAL.2009.5262570
Chinese Library Classification
TP [Automation technology; computer technology]
Discipline Classification Code
0812
Abstract
To address the poor convergence of neural networks used to generalize reinforcement learning, a neural and case-based Q-learning (NCQL) algorithm is proposed. The basic principle of NCQL is that reinforcement learning is generalized by a neural network, while convergence and learning efficiency are improved by stored cases. The details of the learning algorithm are worked out for the application of motion planning for a mobile robot. Simulation results show the validity and practicality of the NCQL algorithm.
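The abstract only sketches NCQL at a high level: Q-values are generalized by a function approximator, and stored cases are replayed to improve convergence. As a rough illustration of that combination (not the paper's actual design), here is a minimal sketch using a linear Q-value approximator with one-hot features and a small replay memory of cases, on a toy grid world. The environment, feature map, and case-selection rule are illustrative assumptions.

```python
import random

GRID = 4                                      # toy 4x4 grid world (assumed, not from the paper)
GOAL = (GRID - 1, GRID - 1)                   # goal in the far corner
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

def features(state, a):
    """One-hot feature vector over (state, action) pairs."""
    x, y = state
    f = [0.0] * (GRID * GRID * len(ACTIONS))
    f[(x * GRID + y) * len(ACTIONS) + a] = 1.0
    return f

def q_value(w, state, a):
    """Q(s, a) as a linear function of the features."""
    return sum(wi * fi for wi, fi in zip(w, features(state, a)))

def step(state, a):
    """Move in the grid (walls clamp); small step cost, +1 at the goal."""
    dx, dy = ACTIONS[a]
    ns = (min(max(state[0] + dx, 0), GRID - 1),
          min(max(state[1] + dy, 0), GRID - 1))
    return ns, (1.0 if ns == GOAL else -0.01)

def train(episodes=200, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    w = [0.0] * (GRID * GRID * len(ACTIONS))
    cases = []                                # stored transitions ("cases")
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(40):
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda b: q_value(w, s, b))
            ns, r = step(s, a)
            cases.append((s, a, r, ns))
            # TD update on the fresh transition plus a few replayed cases
            batch = [cases[-1]] + rng.sample(cases, min(3, len(cases)))
            for cs, ca, cr, cns in batch:
                target = cr + gamma * max(q_value(w, cns, b)
                                          for b in range(len(ACTIONS)))
                td = target - q_value(w, cs, ca)
                for i, fi in enumerate(features(cs, ca)):
                    if fi:
                        w[i] += alpha * td * fi
            s = ns
            if s == GOAL:
                break
    return w
```

Replaying stored cases alongside each fresh transition plays the role the abstract assigns to the case base: it reuses informative experience so the approximator converges faster and more stably than with purely online updates.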
Pages: 1024-1028
Page count: 5