Double action Q-learning for obstacle avoidance in a dynamically changing environment

Cited by: 0
Authors
Ngai, DCK [1 ]
Yung, NHC [1 ]
Institution
[1] Univ Hong Kong, Dept Elect & Elect Engn, Hong Kong, Hong Kong, Peoples R China
Source
2005 IEEE Intelligent Vehicles Symposium Proceedings | 2005
Keywords
Q-learning; reinforcement learning; temporal differences; obstacle avoidance;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, we propose a new method for solving the reinforcement learning problem in a dynamically changing environment, such as vehicle navigation, in which the Markov Decision Process used in traditional reinforcement learning is modified so that the response of the environment is taken into consideration when determining the agent's next state. This is achieved by changing the action-value function to handle three parameters at a time, namely, the current state, the action taken by the agent, and the action taken by the environment. Because it considers the actions of both the agent and the environment, the method is termed "Double Action". The proposed method is implemented based on Q-learning, with the update rule modified to handle all three parameters. Preliminary results show that the proposed method reduces the sum of (negative) rewards by 89.5% compared with the traditional method. In addition, the total number of collisions and the mean number of steps per episode are 89.5% and 15.5% lower, respectively, than those of the traditional method.
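The "double action" value function described in the abstract can be pictured as a Q-table indexed by the current state, the agent's action, and the environment's action. The abstract does not give the exact update rule, so the following is only a minimal illustrative sketch that follows the standard Q-learning form; the table sizes, learning parameters, and the choice to maximize over the agent's next action given the environment's observed next action are assumptions, not the authors' published rule.

```python
import numpy as np

# Sketch of a "double action" Q-table: Q(state, agent_action, env_action).
# Sizes and hyperparameters below are illustrative assumptions.
n_states, n_agent_actions, n_env_actions = 100, 4, 4
alpha, gamma = 0.1, 0.95  # learning rate and discount factor (assumed)

Q = np.zeros((n_states, n_agent_actions, n_env_actions))

def double_action_update(s, a_agent, a_env, reward, s_next, a_env_next):
    """One temporal-difference update of Q(s, a_agent, a_env).

    Assumes the environment's action at the next step (a_env_next) has been
    observed, and bootstraps on the best agent action for that situation.
    """
    best_next = np.max(Q[s_next, :, a_env_next])          # greedy value of next state
    td_target = reward + gamma * best_next                # standard TD target form
    Q[s, a_agent, a_env] += alpha * (td_target - Q[s, a_agent, a_env])
```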
Pages: 211 - 216
Page count: 6