The Hanabi challenge: A new frontier for AI research

Cited by: 133
Authors
Bard, Nolan [1 ]
Foerster, Jakob N. [2 ]
Chandar, Sarath [3 ]
Burch, Neil [1 ]
Lanctot, Marc [1 ]
Song, H. Francis [4 ]
Parisotto, Emilio [5 ]
Dumoulin, Vincent [3 ]
Moitra, Subhodeep [3 ]
Hughes, Edward [4 ]
Dunning, Iain [4 ]
Mourad, Shibl [6 ]
Larochelle, Hugo [3 ]
Bellemare, Marc G. [3 ]
Bowling, Michael [1 ]
Affiliations
[1] DeepMind, Edmonton, AB, Canada
[2] Univ Oxford, Oxford, England
[3] Google Brain, Montreal, PQ, Canada
[4] DeepMind, London, England
[5] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
[6] DeepMind, Montreal, PQ, Canada
Keywords
Multi-agent learning; Challenge paper; Reinforcement learning; Games; Theory of mind; Communication; Imperfect information; Cooperative; ARCADE LEARNING-ENVIRONMENT; COMPREHENSIVE SURVEY; REINFORCEMENT; GAME; GO; POKER;
DOI
10.1016/j.artint.2019.103216
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
From the early days of computing, games have been important testbeds for studying how well machines can do sophisticated decision making. In recent years, machine learning has made dramatic advances with artificial agents reaching superhuman performance in challenge domains like Go, Atari, and some variants of poker. As with their predecessors of chess, checkers, and backgammon, these game domains have driven research by providing sophisticated yet well-defined challenges for artificial intelligence practitioners. We continue this tradition by proposing the game of Hanabi as a new challenge domain with novel problems that arise from its combination of purely cooperative gameplay with two to five players and imperfect information. In particular, we argue that Hanabi elevates reasoning about the beliefs and intentions of other agents to the foreground. We believe developing novel techniques for such theory of mind reasoning will not only be crucial for success in Hanabi, but also in broader collaborative efforts, especially those with human partners. To facilitate future research, we introduce the open-source Hanabi Learning Environment, propose an experimental framework for the research community to evaluate algorithmic advances, and assess the performance of current state-of-the-art techniques. (C) 2019 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
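The imperfect information at Hanabi's core comes from its hint mechanic: players cannot see their own cards, and a hint names a color or a rank and reveals to a teammate exactly which slots in their hand match it. A minimal self-contained sketch of that revelation rule (illustrative only; the function and card representation here are hypothetical, not the paper's Hanabi Learning Environment API):

```python
# Sketch of Hanabi's hint mechanic: a hint names a color or a rank and
# reveals exactly which slots in a teammate's hand match that value.
# Cards are represented as simple dicts; this mirrors the game rule, not
# the paper's Hanabi Learning Environment API.

def hint(hand, attribute, value):
    """Return the 0-based slots of `hand` whose `attribute` equals `value`.

    hand: list of dicts like {"color": "R", "rank": 1}
    attribute: "color" or "rank"
    """
    return [i for i, card in enumerate(hand) if card[attribute] == value]

hand = [
    {"color": "R", "rank": 1},
    {"color": "G", "rank": 3},
    {"color": "R", "rank": 4},
]
print(hint(hand, "color", "R"))  # slots holding red cards -> [0, 2]
print(hint(hand, "rank", 3))     # slots holding rank-3 cards -> [1]
```

Because a hint conveys only this sparse, rule-constrained signal, effective play requires reasoning about why a teammate chose one legal hint over another, which is the theory-of-mind challenge the paper highlights.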
Pages: 19
References
85 entries in total
[61]  
Lewis M., 2017, Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, P2433
[62]   Implicit Communication of Actionable Information in Human-AI teams [J].
Liang, Claire ;
Proft, Julia ;
Andersen, Erik ;
Knepper, Ross A. .
CHI 2019: PROCEEDINGS OF THE 2019 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, 2019,
[63]  
Littman M. L., 1994, Proceedings of the 11th International Conference on Machine Learning, P157, DOI 10.1016/B978-1-55860-335-6.50027-1
[64]   Revisiting the Arcade Learning Environment: Evaluation Protocols and Open Problems for General Agents [J].
Machado, Marlos C. ;
Bellemare, Marc G. ;
Talvitie, Erik ;
Veness, Joel ;
Hausknecht, Matthew ;
Bowling, Michael .
JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH, 2018, 61 :523-562
[65]   Independent reinforcement learners in cooperative Markov games: a survey regarding coordination problems [J].
Matignon, Laetitia ;
Laurent, Guillaume J. ;
Le Fort-Piat, Nadine .
KNOWLEDGE ENGINEERING REVIEW, 2012, 27 (01) :1-31
[66]  
Mnih V., 2016, Proceedings of Machine Learning Research, V48
[67]   Human-level control through deep reinforcement learning [J].
Mnih, Volodymyr ;
Kavukcuoglu, Koray ;
Silver, David ;
Rusu, Andrei A. ;
Veness, Joel ;
Bellemare, Marc G. ;
Graves, Alex ;
Riedmiller, Martin ;
Fidjeland, Andreas K. ;
Ostrovski, Georg ;
Petersen, Stig ;
Beattie, Charles ;
Sadik, Amir ;
Antonoglou, Ioannis ;
King, Helen ;
Kumaran, Dharshan ;
Wierstra, Daan ;
Legg, Shane ;
Hassabis, Demis .
NATURE, 2015, 518 (7540) :529-533
[68]   DeepStack: Expert-level artificial intelligence in heads-up no-limit poker [J].
Moravcik, Matej ;
Schmid, Martin ;
Burch, Neil ;
Lisy, Viliam ;
Morrill, Dustin ;
Bard, Nolan ;
Davis, Trevor ;
Waugh, Kevin ;
Johanson, Michael ;
Bowling, Michael .
SCIENCE, 2017, 356 (6337) :508-513
[69]  
Nowé A., 2012, Adaptation, Learning, and Optimization, V12, P441
[70]  
Pacuit, Eric, 2017, The Stanford Encyclopedia of Philosophy, Fall 2017 edition