Distributed Off-Policy Temporal Difference Learning Using Primal-Dual Method

Cited by: 2
Authors
Lee, Donghwan [1 ]
Kim, Do Wan [2 ]
Hu, Jianghai [3 ]
Affiliations
[1] Korea Adv Inst Sci & Technol, Dept Elect Engn, Daejeon 34141, South Korea
[2] Hanbat Natl Univ, Dept Elect Engn, Daejeon 34158, South Korea
[3] Purdue Univ, Dept Elect & Comp Engn, W Lafayette, IN 47906 USA
Funding
National Research Foundation of Singapore; U.S. National Science Foundation;
Keywords
Convergence; Linear programming; Optimization; Markov processes; Symmetric matrices; Communication networks; Reinforcement learning (RL); Machine learning; Sequential analysis; Multi-agent systems; Optimal control; Distributed processing; Temporal difference (TD) learning; Primal-dual method; Algorithm; Consensus
DOI
10.1109/ACCESS.2022.3211395
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
The goal of this paper is to provide theoretical analysis of, and additional insights into, a distributed temporal-difference (TD) learning algorithm for multi-agent Markov decision processes (MDPs) from a saddle-point viewpoint. Single-agent TD learning is a reinforcement learning (RL) algorithm for evaluating a given policy from reward feedback. In the multi-agent setting, multiple RL agents act concurrently, and each agent receives only its own local rewards. The goal of each agent is to evaluate the given policy with respect to the global reward, defined as the average of the local rewards, by sharing learning parameters over random network communications. In this paper, we propose a distributed TD learning algorithm based on saddle-point frameworks and provide a rigorous finite-time convergence analysis of the algorithm and its solution using tools from optimization theory. The results provide a general and unified perspective on the distributed policy evaluation problem and theoretically complement previous works.
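To make the setting concrete, the following is a minimal sketch of distributed policy evaluation with a primal-dual (saddle-point) TD update and consensus averaging. It is not the authors' exact algorithm: it assumes a synthetic MDP under a fixed policy, linear function approximation, a GTD2-style primal-dual step, and a fixed doubly stochastic mixing matrix W over a ring communication network; all names and step sizes below are hypothetical illustration choices.

# Sketch: distributed primal-dual TD learning with consensus (not the paper's
# exact method). Each agent i sees only its own reward row local_rewards[i],
# mixes parameters with its ring neighbors via W, then takes a GTD2-style
# primal-dual step toward the value of the GLOBAL (average) reward.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_feat, n_agents, gamma = 20, 5, 4, 0.9

# Synthetic MDP under a fixed policy: row-stochastic transition matrix P,
# per-agent local rewards, and shared linear features Phi.
P = rng.random((n_states, n_states)); P /= P.sum(axis=1, keepdims=True)
local_rewards = rng.random((n_agents, n_states))
Phi = rng.standard_normal((n_states, n_feat))

# Doubly stochastic mixing matrix for a ring communication network.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

theta = np.zeros((n_agents, n_feat))  # primal variables (value weights)
lam = np.zeros((n_agents, n_feat))    # dual/auxiliary variables

alpha, beta, s = 0.05, 0.05, 0
for t in range(20000):
    s_next = rng.choice(n_states, p=P[s])
    phi, phi_next = Phi[s], Phi[s_next]
    new_theta, new_lam = np.empty_like(theta), np.empty_like(lam)
    for i in range(n_agents):
        # Local TD error computed from agent i's private reward only.
        delta = local_rewards[i, s] + gamma * theta[i] @ phi_next - theta[i] @ phi
        # GTD2-style primal-dual step applied after consensus mixing with W.
        grad_theta = (phi - gamma * phi_next) * (phi @ lam[i])
        new_theta[i] = W[i] @ theta + alpha * grad_theta
        new_lam[i] = W[i] @ lam + beta * (delta - phi @ lam[i]) * phi
    theta, lam, s = new_theta, new_lam, s_next

print("agent parameter disagreement:", np.linalg.norm(theta - theta.mean(0)))

Because W is doubly stochastic, the consensus step drives the agents' parameters toward their network average, so each agent's local TD update implicitly tracks the average of all local rewards, i.e., the global reward.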
Pages: 107077-107094
Page count: 18