Deep reinforcement learning using least-squares truncated temporal-difference

Cited by: 3
Authors
Ren, Junkai [1 ]
Lan, Yixing [1 ]
Xu, Xin [1 ]
Zhang, Yichuan [2 ]
Fang, Qiang [1 ]
Zeng, Yujun [1 ]
Affiliations
[1] Natl Univ Def Technol, Coll Intelligence Sci & Technol, Changsha, Peoples R China
[2] Xian Satellite Control Ctr, State Key Lab Astronaut Dynam, Xian, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Deep reinforcement learning; policy evaluation; temporal difference; value function approximation;
DOI
10.1049/cit2.12202
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Policy evaluation (PE) is a critical sub-problem in reinforcement learning: it estimates the value function of a given policy and can be used for policy improvement. However, current PE methods still suffer from limitations such as low sample efficiency and local convergence, especially on complex tasks. In this study, a novel PE algorithm called Least-Squares Truncated Temporal-Difference learning (LST²D) is proposed. In LST²D, an adaptive truncation mechanism is designed that effectively combines the fast convergence of Least-Squares Temporal Difference learning (LSTD) with the asymptotic convergence of Temporal Difference learning (TD). Two feature pre-training methods are then utilised to improve the approximation ability of LST²D. Furthermore, an Actor-Critic algorithm based on LST²D and pre-trained feature representations (ACLPF) is proposed, in which LST²D is integrated into the critic network to improve learning and prediction efficiency. Comprehensive simulation studies were conducted on four robotic tasks, and the results illustrate the effectiveness of LST²D. The proposed ACLPF algorithm outperformed DQN, ACER and PPO in terms of sample efficiency and stability, demonstrating that LST²D can be applied to online learning control problems by incorporating it into the actor-critic architecture.
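The abstract describes LST²D as blending the fast initial convergence of batch LSTD with the asymptotic refinement of incremental TD through an adaptive truncation mechanism. The sketch below is only a rough illustration of that general idea with linear value features: it warm-starts the weights from a closed-form LSTD solution and then switches to TD(0) updates at a fixed truncation point. The switching rule, function names and hyper-parameters are assumptions for illustration; they are not the paper's adaptive mechanism, its pre-trained deep features, or the ACLPF architecture.

# Illustrative sketch: LSTD warm start followed by incremental TD(0) refinement.
# NOT the authors' LST2D algorithm; the fixed switch point is an assumption.
import numpy as np

def lstd_solution(phi, phi_next, rewards, gamma=0.99, reg=1e-3):
    """Closed-form LSTD weights from a batch of transitions.
    phi, phi_next: (N, d) feature matrices; rewards: (N,) vector."""
    A = phi.T @ (phi - gamma * phi_next) + reg * np.eye(phi.shape[1])
    b = phi.T @ rewards
    return np.linalg.solve(A, b)

def td_update(w, phi_t, phi_tp1, r, gamma=0.99, alpha=0.01):
    """One incremental TD(0) update of linear value weights."""
    delta = r + gamma * phi_tp1 @ w - phi_t @ w   # TD error
    return w + alpha * delta * phi_t

def truncated_td_pe(transitions, gamma=0.99, switch_step=500):
    """Hypothetical policy-evaluation loop: use the batch LSTD solution early
    (fast initial convergence), then continue with TD(0) updates on the
    remaining stream (asymptotic refinement) after the truncation point."""
    phi = np.array([t[0] for t in transitions])
    r = np.array([t[1] for t in transitions])
    phi_next = np.array([t[2] for t in transitions])

    # Warm start from the LSTD solution on the first transitions.
    w = lstd_solution(phi[:switch_step], phi_next[:switch_step], r[:switch_step], gamma)

    # Refine incrementally with TD(0) on the rest of the data.
    for phi_t, rew, phi_tp1 in zip(phi[switch_step:], r[switch_step:], phi_next[switch_step:]):
        w = td_update(w, phi_t, phi_tp1, rew, gamma)
    return w

# Tiny usage example with random features, purely for shape checking.
rng = np.random.default_rng(0)
demo = [(rng.standard_normal(8), rng.random(), rng.standard_normal(8)) for _ in range(1000)]
weights = truncated_td_pe(demo, switch_step=500)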
Pages: 425-439
Page count: 15