Learning to Generalize With Object-Centric Agents in the Open World Survival Game Crafter

Cited by: 0
Authors
Stanic, Aleksandar [1 ]
Tang, Yujin [2 ]
Ha, David [3 ]
Schmidhuber, Jurgen [1 ,4 ]
Affiliations
[1] SUPSI, IDSIA, USI, CH-6900 Lugano, Switzerland
[2] Google Brain, Tokyo 1066126, Japan
[3] Stability AI, Tokyo 1066126, Japan
[4] KAUST, AI Initiative, Thuwal, Saudi Arabia
Funding
European Research Council
Keywords
Games; benchmark testing; training; iron; coal; vegetation; diamonds; Crafter; generalization; object-centric agents; open world survival games; PPO; environment
DOI
10.1109/TG.2023.3276849
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Reinforcement learning agents must generalize beyond their training experience. Prior work has focused mostly on identical training and evaluation environments. Starting from the recently introduced Crafter benchmark, a 2-D open world survival game, we introduce a new set of environments suitable for evaluating an agent's ability to generalize to previously unseen (numbers of) objects and to adapt quickly (meta-learning). In Crafter, agents are evaluated by the number of achievements they unlock (such as collecting resources) when trained for 1M steps. We show that current agents struggle to generalize, and introduce novel object-centric agents that improve over strong baselines. We also provide critical insights of general interest for future work on Crafter through several experiments. We show that careful hyperparameter tuning improves the PPO baseline agent by a large margin and that even feedforward agents can unlock almost all achievements by relying on the inventory display. We achieve new state-of-the-art performance on the original Crafter environment. In addition, when trained beyond 1M steps, our tuned agents can unlock almost all achievements. We show that recurrent PPO agents improve over feedforward ones, even with the inventory information removed. We introduce CrafterOOD, a set of 15 new environments that evaluate OOD generalization. On CrafterOOD, we show that current agents fail to generalize, whereas our novel object-centric agents achieve state-of-the-art OOD generalization while also being interpretable. Our code is public.
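The evaluation protocol summarized above (agents scored by the achievements unlocked per episode) can be illustrated with a minimal sketch against the public `crafter` package (pip install crafter). This is an illustrative assumption, not the authors' training code: the random policy below is a placeholder for the paper's PPO or object-centric agents.

# Minimal sketch of the Crafter evaluation loop: run one episode and read
# the per-episode achievement counts that agents are compared on.
# Assumes the public `crafter` package; random actions stand in for a policy.
import random

import crafter

env = crafter.Env()          # 64x64x3 RGB observations, discrete action set
obs = env.reset()
done = False
while not done:
    action = random.randrange(env.action_space.n)    # placeholder for a learned policy
    obs, reward, done, info = env.step(action)

# Crafter reports achievement counts in the info dict (e.g. 'collect_wood',
# 'make_iron_pickaxe'); an agent's score depends on how many it unlocks.
unlocked = [name for name, count in info['achievements'].items() if count > 0]
print(f"Unlocked {len(unlocked)} achievements: {unlocked}")

In the paper's setting, the placeholder action selection would be replaced by a trained agent, and results would be averaged over many episodes within the 1M-step training budget.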
Pages: 384-395
Number of pages: 12