A Bayesian reinforcement learning approach in Markov games for computing near-optimal policies

Cited by: 0
Authors
Julio B. Clempner
Affiliation
[1] Escuela Superior de Física y Matemáticas (School of Physics and Mathematics), Instituto Politécnico Nacional (National Polytechnic Institute), Building 9, Av. Instituto Politécnico Nacional
Source
Annals of Mathematics and Artificial Intelligence | 2023, Vol. 91
Keywords
Reinforcement learning; Bayesian inference; Markov games with private information; Bayesian equilibrium; 91A10; 91A40; 91A26; 62C10; 60J20
Abstract
Bayesian Learning is an inference method designed to tackle the exploration-exploitation trade-off as a function of the uncertainty of a given probability model estimated from observations within the Reinforcement Learning (RL) paradigm. It allows prior knowledge to be incorporated into the algorithms as probability distributions. Finding the resulting Bayes-optimal policies is a notoriously hard problem. We focus our attention on RL for a special class of ergodic and controllable Markov games. We propose a new framework for computing near-optimal policies for each agent, under the assumptions that the Markov chains are regular and that the inverse of the behavior strategy is well defined. A fundamental result of this paper is a theoretical method that, based on the formulation of a non-linear problem, computes the near-optimal adaptive-behavior strategies and policies of the game under restrictions that maximize the expected reward. We prove that these behavior strategies and policies satisfy a Bayesian-Nash equilibrium. Another important result is that the RL process learns a model through the interaction of the agents with the environment; we show how the proposed method can finitely approximate and estimate the elements of the transition matrices and utilities while maintaining an efficient long-term learning performance measure. We develop an algorithm for implementing this model. A numerical example shows how to deploy the estimation process as a function of agent experience.
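The abstract describes estimating the elements of the transition matrices and utilities from the agents' interaction with the environment. The following minimal Python sketch is an illustration only, not the paper's actual algorithm: it shows one common Bayesian approach to such estimation, using Dirichlet prior counts over next states and empirical utility averages. The class name `BayesianTransitionEstimator` and all parameters are hypothetical.

```python
import numpy as np

class BayesianTransitionEstimator:
    """Hypothetical sketch: Dirichlet-count estimation of one agent's
    transition matrix and mean utilities from observed interactions."""

    def __init__(self, n_states, n_actions, prior=1.0):
        # Dirichlet prior counts over next states for every (state, action) pair
        self.counts = np.full((n_states, n_actions, n_states), prior)
        self.reward_sum = np.zeros((n_states, n_actions))
        self.reward_n = np.zeros((n_states, n_actions))

    def update(self, s, a, s_next, reward):
        # One interaction step: update Dirichlet counts and utility sums
        self.counts[s, a, s_next] += 1.0
        self.reward_sum[s, a] += reward
        self.reward_n[s, a] += 1.0

    def transition_posterior_mean(self):
        # Posterior mean of the transition matrix P(s' | s, a)
        return self.counts / self.counts.sum(axis=2, keepdims=True)

    def utility_estimate(self):
        # Empirical mean reward per (state, action); zero where unvisited
        return self.reward_sum / np.maximum(self.reward_n, 1.0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_states, n_actions = 3, 2
    true_P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
    est = BayesianTransitionEstimator(n_states, n_actions)

    s = 0
    for _ in range(5000):
        a = rng.integers(n_actions)               # exploratory behavior strategy
        s_next = rng.choice(n_states, p=true_P[s, a])
        r = float(s_next)                         # toy utility signal
        est.update(s, a, s_next, r)
        s = s_next

    # Estimated transition matrix converges toward true_P as experience grows
    print(np.round(est.transition_posterior_mean(), 2))
```

As a design note, the Dirichlet posterior mean gives a finite-sample estimate of each row of the transition matrix after any number of interactions, which is in the spirit of the "finitely approximate and estimate" claim in the abstract, though the paper's own estimation and equilibrium computation are more involved.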
Pages: 675-690 (15 pages)