Dynamic non-Bayesian decision making in multi-agent systems

Cited by: 0
Authors
Dov Monderer
Moshe Tennenholtz
Affiliation
[1] Technion — Israel Institute of Technology, Faculty of Industrial Engineering and Management
Source
Annals of Mathematics and Artificial Intelligence | 1999 / Volume 25
Keywords
Payoff; Joint Action; Payoff Function; Multiagent System; Competitive Ratio
DOI
Not available
Abstract
We consider a group of several non-Bayesian agents that can fully coordinate their activities and share their past experience in order to achieve a joint goal in the face of uncertainty. The reward obtained by each agent is a function of the environment state but not of the actions taken by the other agents in the group. The environment state (controlled by Nature) may change arbitrarily, and the reward function is initially unknown. Two basic feedback structures are considered. In the first, the perfect monitoring case, the agents are able to observe the previous environment state as part of their feedback; in the second, the imperfect monitoring case, all that is available to the agents are the rewards obtained. Both settings are partially observable processes in which the current environment state is unknown. Our study adopts the competitive ratio criterion. It is shown that, for the imperfect monitoring case, there exists an efficient stochastic policy that ensures that the competitive ratio is obtained for all agents at almost all stages with arbitrarily high probability, where efficiency is measured in terms of rate of convergence. It is also shown that if the agents are restricted to deterministic policies, then no such policy exists, even in the perfect monitoring case.
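As a rough illustration of the competitive ratio criterion mentioned in the abstract (and not the authors' construction), the sketch below assumes a finite payoff matrix U[a, s] over actions and states with strictly positive entries; the matrix itself is invented for the example. Under this common formalization, the competitive ratio is the best fraction of the state-wise optimal payoff that some single action can guarantee across all states.

```python
import numpy as np

def competitive_ratio(U):
    """Competitive ratio of a payoff matrix U[a, s] (rows = actions,
    columns = environment states), assuming all payoffs are positive:
    the largest r such that some single action obtains at least r times
    the best achievable payoff in every state.
    """
    best_per_state = U.max(axis=0)     # max_b U[b, s] for each state s
    ratios = U / best_per_state        # U[a, s] / max_b U[b, s]
    return ratios.min(axis=1).max()    # max over actions of the worst-state ratio

# Hypothetical example: two actions, three environment states.
U = np.array([[4.0, 1.0, 3.0],
              [3.0, 2.0, 2.0]])
print(competitive_ratio(U))  # 2/3: the second action secures at least 2/3 of the optimum in every state
```

In the paper's dynamic setting the payoff function is initially unknown, so a policy must attain roughly this ratio at almost all stages while learning from feedback; the stochastic-policy result says this is achievable with arbitrarily high probability even under imperfect monitoring, whereas deterministic policies cannot guarantee it.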
Pages: 91-106
Number of pages: 16