Q-LEARNING

Cited by: 153
Authors
WATKINS, CJCH [1 ]
DAYAN, P [1 ]
Affiliation
[1] UNIV EDINBURGH,CTR COGNIT SCI,EDINBURGH EH8 9EH,SCOTLAND
Keywords
Q-LEARNING; REINFORCEMENT LEARNING; TEMPORAL DIFFERENCES; ASYNCHRONOUS DYNAMIC PROGRAMMING;
DOI
10.1023/A:1022676722315
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Q-learning (Watkins, 1989) is a simple way for agents to learn how to act optimally in controlled Markovian domains. It amounts to an incremental method for dynamic programming which imposes limited computational demands. It works by successively improving its evaluations of the quality of particular actions at particular states. This paper presents and proves in detail a convergence theorem for Q-learning based on that outlined in Watkins (1989). We show that Q-learning converges to the optimum action-values with probability 1 so long as all actions are repeatedly sampled in all states and the action-values are represented discretely. We also sketch extensions to the cases of non-discounted, but absorbing, Markov environments, and where many Q values can be changed each iteration, rather than just one.
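To make the incremental action-value update described in the abstract concrete, the following is a minimal tabular Q-learning sketch in Python. The environment interface (a gym-style reset()/step() pair), the state and action counts, and the hyperparameters alpha, gamma, and epsilon are illustrative assumptions, not part of the original paper; the update rule itself is the one whose convergence the paper proves.

```python
import numpy as np

def q_learning(env, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Q-learning sketch (hypothetical discrete env assumed).

    Update rule (Watkins, 1989):
        Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
    Convergence to the optimal action-values requires every (state, action)
    pair to be sampled repeatedly and the values to be stored discretely,
    as stated in the abstract.
    """
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = env.reset()          # assumed: returns an integer state index
        done = False
        while not done:
            # epsilon-greedy exploration keeps all actions sampled in all states
            if np.random.rand() < epsilon:
                a = np.random.randint(n_actions)
            else:
                a = int(np.argmax(Q[s]))
            s_next, r, done = env.step(a)   # assumed: (next state, reward, done)
            # one-step update toward the bootstrapped target
            target = r + (0.0 if done else gamma * np.max(Q[s_next]))
            Q[s, a] += alpha * (target - Q[s, a])
            s = s_next
    return Q
```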
Pages: 279 - 292
Page count: 14
Related Papers
14 in total
  • [1] WATKINS CJCH, 1989, LEARNING DELAYED REW
  • [2] [Anonymous], 1978, STOCHASTIC APPROXIMA
  • [3] BARTO AG, 1991, COINS9157 U MASS TEC
  • [4] BARTO AG, 1990, 1990 P CONN MOD SUMM
  • [5] Bellman Richard, 1962, APPL DYNAMIC PROGRAM
  • [6] CHAPMAN D, 1991, 1991 P INT JOINT C A, P726
  • [7] Lin L. - J., 1992, MACHINE LEARNING, V8
  • [8] MAHADEVAN, 1991, 1991 P NAT C AI, P768
  • [9] Ross S.M., 2014, INTRO STOCHASTIC DYN
  • [10] SATO M, ABE K, TAKEDA H, 1988, Learning control of finite Markov chains with an explicit trade-off between estimation and control, IEEE Transactions on Systems, Man, and Cybernetics, 18(5): 677 - 684