SAMBA: safe model-based & active reinforcement learning

Citations: 0
Authors
Alexander I. Cowen-Rivers
Daniel Palenicek
Vincent Moens
Mohammed Amin Abdullah
Aivar Sootla
Jun Wang
Haitham Bou-Ammar
Institutions
[1] Huawei Noah’s Ark Lab
[2] Technical University Darmstadt
[3] University College London
Source
Machine Learning | 2022 / Volume 111
Keywords
Gaussian process; Safe reinforcement learning; Active learning;
DOI
Not available
Abstract
In this paper, we propose SAMBA, a novel framework for safe reinforcement learning that combines aspects of probabilistic modelling, information theory, and statistics. Our method builds upon PILCO to enable active exploration using novel acquisition functions for out-of-sample Gaussian process evaluation, optimised through a multi-objective problem that supports conditional-value-at-risk constraints. We evaluate our algorithm on a variety of safe dynamical system benchmarks involving both low- and high-dimensional state representations. Our results show orders-of-magnitude reductions in samples and violations compared to state-of-the-art methods. Lastly, we provide intuition about the effectiveness of the framework through a detailed analysis of our acquisition functions and safety constraints.
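The abstract's central mechanism is a multi-objective trade-off: an exploration signal derived from the Gaussian process model, constrained by a conditional-value-at-risk (CVaR) bound on cost. The sketch below is a minimal illustration of that idea, not the paper's implementation: the names (empirical_cvar, acquisition), the single-Gaussian Monte-Carlo posterior over cost, and the fixed Lagrangian-style penalty with parameters budget and lam are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_cvar(samples, alpha=0.95):
    """Mean of the worst (1 - alpha) fraction of cost samples (higher cost = worse)."""
    var = np.quantile(samples, alpha)       # value-at-risk threshold
    return samples[samples >= var].mean()   # average of the tail beyond it

def acquisition(mu, sigma, alpha=0.95, budget=1.0, lam=10.0, n_mc=2000):
    """Exploration bonus (posterior std) minus a penalty whenever the
    Monte-Carlo CVaR of the predicted cost exceeds the safety budget."""
    cost_samples = rng.normal(mu, sigma, size=n_mc)  # assumed Gaussian posterior over cost
    cvar = empirical_cvar(cost_samples, alpha)
    return sigma - lam * max(0.0, cvar - budget)     # explore, but stay within budget

# Toy usage: score three candidate state-action points, each summarised by
# the GP posterior mean and standard deviation of its predicted cost.
candidates = [(0.2, 0.8), (0.9, 1.5), (0.1, 0.3)]    # (mu, sigma) pairs
scores = [acquisition(mu, s) for mu, s in candidates]
print("selected candidate index:", int(np.argmax(scores)))
```

In the full method one would expect the cost distribution to come from rollouts of the learned GP dynamics model rather than a single Gaussian, and the constraint to be handled by the multi-objective optimisation the abstract describes rather than a fixed penalty weight.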
Pages: 173-203
Number of pages: 30