PALO bounds for reinforcement learning in partially observable stochastic games

Cited by: 5
Authors
Ceren, Roi [1 ]
He, Keyang [1 ]
Doshi, Prashant [1 ]
Banerjee, Bikramjit [2 ]
Affiliations
[1] Univ Georgia, Dept Comp Sci, THINC Lab, Athens, GA 30602 USA
[2] Univ Southern Mississippi, Sch Comp Sci & Comp Engn, Hattiesburg, MS 39406 USA
Keywords
Multiagent systems; Reinforcement learning; POMDP; POSG; Framework
DOI
10.1016/j.neucom.2020.08.054
Chinese Library Classification
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
A partially observable stochastic game (POSG) is a general model for multiagent decision making under uncertainty. Perkins' Monte Carlo exploring starts for partially observable Markov decision processes (POMDPs), MCES-P, integrates Monte Carlo exploring starts (MCES) into a local search of the policy space, offering an elegant template for model-free reinforcement learning in POSGs. However, multiagent reinforcement learning in POSGs is far more complex than in single-agent settings due to the heterogeneity of the agents and the discrepancy in their goals. In this article, we generalize reinforcement learning under partial observability to self-interested and cooperative multiagent settings under the POSG umbrella. We present three new templates for multiagent reinforcement learning in POSGs. MCES for interactive POMDPs (MCES-IP) extends MCES-P by maintaining predictions of the other agent's actions based on dynamic beliefs over models. MCES for multiagent POMDPs (MCES-MP) generalizes MCES-P to the canonical multiagent POMDP framework, with a single policy mapping joint observations of all agents to joint actions. Finally, MCES for factored-reward multiagent POMDPs (MCES-FMP) has each agent individually map joint observations to its own action. We use probabilistic approximate locally optimal (PALO) bounds to analyze sample complexity, thereby instantiating these templates as PALO learning. We promote sample efficiency by including a policy-space pruning technique, and we evaluate the approaches on six benchmark domains and compare them with state-of-the-art techniques; the results demonstrate that MCES-IP and MCES-FMP yield improved policies with fewer samples than the previous baselines. (C) 2020 Elsevier B.V. All rights reserved.
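To make the MCES-P template concrete, the following is a minimal Python sketch of Perkins-style local search in policy space with a Hoeffding-style comparison test in the spirit of PALO bounds. The toy two-state "tiger"-like POMDP, the reward values, the function names, and all hyperparameters are illustrative assumptions, not the paper's benchmark domains or its exact bound; the multiagent templates (MCES-IP, MCES-MP, MCES-FMP) build on this loop by replacing single-agent rollouts with joint observations and actions.

import random
from math import log, sqrt
from itertools import product

# Toy two-state "tiger"-like POMDP; purely illustrative, not one of the
# paper's six benchmark domains. Rewards are scaled down so the sketch
# can transform policies with modest batch sizes.
ACTIONS = ("listen", "open-left", "open-right")
OBSERVATIONS = ("hear-left", "hear-right")
HORIZON = 3                                      # policies map histories of length < HORIZON to actions
R_MIN, R_MAX = -20.0 * HORIZON, 10.0 * HORIZON   # return range used by the Hoeffding term

def rollout(policy):
    """Sample one episode return under `policy` from a random hidden state (exploring start)."""
    state = random.choice(("left", "right"))
    history, ret = (), 0.0
    for _ in range(HORIZON):
        action = policy[history]
        if action == "listen":
            ret += -1.0
            obs = ("hear-" + state if random.random() < 0.85
                   else "hear-" + ("right" if state == "left" else "left"))
        else:
            ret += -20.0 if action.endswith(state) else 10.0  # tiger door vs. safe door
            state = random.choice(("left", "right"))
            obs = random.choice(OBSERVATIONS)
        history += (obs,)
    return ret

def histories():
    """All observation histories of length 0 .. HORIZON-1 (the policy's domain)."""
    for t in range(HORIZON):
        yield from product(OBSERVATIONS, repeat=t)

def neighbours(policy):
    """Policies differing from `policy` in the action taken at exactly one history."""
    for h in histories():
        for a in ACTIONS:
            if a != policy[h]:
                yield {**policy, h: a}

def error_bound(k, delta, n_comparisons):
    """Hoeffding-style deviation after k samples, with the confidence budget
    delta split across all neighbourhood comparisons (PALO-flavoured)."""
    return (R_MAX - R_MIN) * sqrt(log(2.0 * n_comparisons / delta) / (2.0 * k))

def mces_p(epsilon=0.5, delta=0.2, batch=3000, max_transforms=10):
    """Local search in policy space: transform to a neighbour only when its
    estimated advantage exceeds epsilon plus the sampling error bound.
    (For simplicity both policies are resampled per comparison; MCES-P
    proper maintains running action-value estimates.)"""
    policy = {h: "listen" for h in histories()}
    n_nbrs = sum(1 for _ in neighbours(policy))
    for _ in range(max_transforms):
        for cand in neighbours(policy):
            gap = (sum(rollout(cand) for _ in range(batch))
                   - sum(rollout(policy) for _ in range(batch))) / batch
            if gap - error_bound(batch, delta, n_nbrs) > epsilon:
                policy = cand
                break
        else:
            return policy   # no provably better neighbour: approximately locally optimal
    return policy

if __name__ == "__main__":
    pi = mces_p()
    print("action after hearing left twice:", pi[("hear-left", "hear-left")])

Because the Hoeffding term is conservative, each neighbourhood comparison needs a large batch of episodes; this sample-complexity pressure is what the policy-space pruning promoted in the article is intended to relieve.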
Pages: 36-56
Number of pages: 21