Efficient data use in incremental actor-critic algorithms

Cited by: 7
Authors
Cheng, Yuhu [1 ]
Feng, Huanting [1 ]
Wang, Xuesong [1 ]
Affiliations
[1] China Univ Min & Technol, Sch Informat & Elect Engn, Xuzhou 221116, Jiangsu, Peoples R China
Funding
National Natural Science Foundation of China; Specialized Research Fund for the Doctoral Program of Higher Education;
Keywords
Actor-critic; Reinforcement learning; Incremental least-squares temporal difference; Recursive least-squares temporal difference; Policy evaluation; Function approximation;
DOI
10.1016/j.neucom.2011.11.034
CLC Classification Number
TP18 [Theory of artificial intelligence];
Subject Classification Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Actor-critic (AC) reinforcement learning methods are on-line approximations to policy iteration and are widely applied to large-scale Markov decision problems and high-dimensional learning control problems. To overcome the data inefficiency of incremental AC algorithms based on temporal difference learning (AC-TD), two new incremental AC algorithms (AC-RLSTD and AC-iLSTD) are proposed by applying a recursive least-squares TD (RLSTD(lambda)) algorithm and an incremental least-squares TD (iLSTD(lambda)) algorithm to the Critic's evaluation step; both make more efficient use of data than TD. The Critic estimates a value function using the RLSTD(lambda) or iLSTD(lambda) algorithm, and the Actor updates the policy with a regular gradient obtained from the TD error. The improvement in the Critic's evaluation efficiency contributes to improved policy learning performance of the Actor. Simulation results on the learning control of an inverted pendulum and on a mountain-car problem illustrate the effectiveness of the two proposed AC algorithms in comparison with the AC-TD algorithm. In addition, AC-iLSTD with a greedy selection mechanism performs much better than AC-iLSTD with a random selection mechanism. The simulations also analyze the effect of different eligibility-trace parameter settings on the learning performance of the AC algorithms. Furthermore, it is found that the initial value of the variance matrix in the AC-RLSTD algorithm should be chosen appropriately for each learning problem to obtain better performance. Crown Copyright (C) 2012 Published by Elsevier B.V. All rights reserved.
Pages: 346-354
Number of pages: 9
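
The abstract describes a Critic that performs policy evaluation with RLSTD(lambda) and an Actor that follows a regular gradient driven by the Critic's TD error. Below is a minimal Python sketch of one such update step, assuming linear value-function features and a Gaussian policy; the feature vectors, policy form, and all hyperparameter values are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch of one actor-critic step with an RLSTD(lambda) critic,
# following the structure described in the abstract. The Gaussian policy
# and all numeric settings below are assumptions for illustration only.
import numpy as np

n = 10                      # number of linear features (assumed)
gamma, lam = 0.95, 0.7      # discount factor and trace decay (assumed)
alpha_actor = 0.01          # actor step size (assumed)

theta = np.zeros(n)         # critic weights: V(s) ~ theta @ phi(s)
P = 100.0 * np.eye(n)       # RLSTD variance matrix; its initial scale matters
z = np.zeros(n)             # eligibility trace
w = np.zeros(n)             # actor (policy) parameters

def rlstd_critic_step(phi_s, phi_next, r):
    """One RLSTD(lambda) update; returns the TD error for the actor."""
    global theta, P, z
    z = gamma * lam * z + phi_s                 # update eligibility trace
    d = phi_s - gamma * phi_next                # discounted feature difference
    Pz = P @ z
    k = Pz / (1.0 + d @ Pz)                     # gain vector
    delta = r + gamma * (theta @ phi_next) - theta @ phi_s   # TD error
    theta = theta + k * delta                   # recursive least-squares update
    P = P - np.outer(k, d @ P)                  # variance matrix update
    return delta

def actor_step(phi_s, action, delta):
    """Regular-gradient policy update driven by the critic's TD error."""
    global w
    # Assumed Gaussian policy with mean w @ phi_s and unit variance, so
    # grad of log pi(a|s) w.r.t. w is (a - w @ phi_s) * phi_s.
    grad_log_pi = (action - w @ phi_s) * phi_s
    w = w + alpha_actor * delta * grad_log_pi
```

As the abstract notes, the initial scale of the variance matrix P (here 100.0, an arbitrary choice) influences learning performance and would need to be tuned per problem.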