Robust Anytime Learning of Markov Decision Processes

Cited by: 0
Authors
Suilen, Marnix [1 ]
Simao, Thiago D. [1 ]
Parker, David [2 ]
Jansen, Nils [1 ]
Affiliations
[1] Radboud Univ Nijmegen, Dept Software Sci, Nijmegen, Netherlands
[2] Univ Oxford, Dept Comp Sci, Oxford, England
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Markov decision processes (MDPs) are formal models commonly used in sequential decision-making. MDPs capture the stochasticity that may arise, for instance, from imprecise actuators via probabilities in the transition function. However, in data-driven applications, deriving precise probabilities from (limited) data introduces statistical errors that may lead to unexpected or undesirable outcomes. Uncertain MDPs (uMDPs) do not require precise probabilities but instead use so-called uncertainty sets in the transitions, accounting for such limited data. Tools from the formal verification community efficiently compute robust policies that provably adhere to formal specifications, like safety constraints, under the worst-case instance in the uncertainty set. We continuously learn the transition probabilities of an MDP in a robust anytime-learning approach that combines a dedicated Bayesian inference scheme with the computation of robust policies. In particular, our method (1) approximates probabilities as intervals, (2) adapts to new data that may be inconsistent with an intermediate model, and (3) may be stopped at any time to compute a robust policy on the uMDP that faithfully captures the data so far. Furthermore, our method is capable of adapting to changes in the environment. We show the effectiveness of our approach and compare it to robust policies computed on uMDPs learned by the UCRL2 reinforcement learning algorithm in an experimental evaluation on several benchmarks.
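To make the abstract's core idea concrete, the sketch below shows one simple way to learn interval estimates of MDP transition probabilities from observed transitions. It is an illustrative assumption, not the paper's method: it uses a Hoeffding-style confidence width, closer in spirit to the UCRL2 baseline mentioned above, rather than the authors' dedicated Bayesian inference scheme. All names (`IntervalMDPLearner`, `observe`, `interval`) are hypothetical.

```python
from collections import defaultdict
from math import log, sqrt


class IntervalMDPLearner:
    """Learn interval estimates of MDP transition probabilities from data.

    Illustrative sketch only: intervals come from counts plus a
    Hoeffding-style confidence width (as in UCRL2), not from the
    paper's Bayesian inference scheme.
    """

    def __init__(self, delta=0.05):
        self.delta = delta  # confidence parameter for the interval width
        # (state, action) -> successor state -> observation count
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, s, a, s_next):
        """Record one observed transition (s, a) -> s_next."""
        self.counts[(s, a)][s_next] += 1

    def interval(self, s, a, s_next):
        """Return (lower, upper) bounds on P(s_next | s, a)."""
        n_sa = sum(self.counts[(s, a)].values())
        if n_sa == 0:
            return (0.0, 1.0)  # no data yet: trivial (vacuous) interval
        p_hat = self.counts[(s, a)][s_next] / n_sa
        width = sqrt(log(2.0 / self.delta) / (2.0 * n_sa))
        return (max(0.0, p_hat - width), min(1.0, p_hat + width))


# Usage: 100 samples of (s=0, a='a'), of which 80 land in state 1.
learner = IntervalMDPLearner()
for _ in range(80):
    learner.observe(0, 'a', 1)
for _ in range(20):
    learner.observe(0, 'a', 2)
lo, hi = learner.interval(0, 'a', 1)  # interval containing p_hat = 0.8
```

The intervals shrink as more data arrives, and a robust policy computed against the resulting uncertain MDP hedges against the worst-case probability inside each interval; the anytime property in the abstract corresponds to being able to stop the `observe` loop at any point and compute such a policy.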
Pages: 13
Related papers
50 records in total
  • [1] Reinforcement Learning in Robust Markov Decision Processes
    Lim, Shiau Hong
    Xu, Huan
    Mannor, Shie
    MATHEMATICS OF OPERATIONS RESEARCH, 2016, 41 (04) : 1325 - 1353
  • [2] Anytime Guarantees for Reachability in Uncountable Markov Decision Processes
    Technische Universität München, Germany
    Leibniz Int. Proc. Informatics, LIPIcs
  • [3] Robust Markov Decision Processes
    Wiesemann, Wolfram
    Kuhn, Daniel
    Rustem, Berc
    MATHEMATICS OF OPERATIONS RESEARCH, 2013, 38 (01) : 153 - 183
  • [4] Learning Robust Policies for Uncertain Parametric Markov Decision Processes
    Rickard, Luke
    Abate, Alessandro
    Margellos, Kostas
    6TH ANNUAL LEARNING FOR DYNAMICS & CONTROL CONFERENCE, 2024, 242 : 876 - 889
  • [5] Distributionally Robust Markov Decision Processes
    Xu, Huan
    Mannor, Shie
    MATHEMATICS OF OPERATIONS RESEARCH, 2012, 37 (02) : 288 - 300
  • [6] Kernel-Based Reinforcement Learning in Robust Markov Decision Processes
    Lim, Shiau Hong
    Autef, Arnaud
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [7] Robust and Efficient Transfer Learning with Hidden Parameter Markov Decision Processes
    Killian, Taylor
    Daulton, Samuel
    Konidaris, George
    Doshi-Velez, Finale
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017), 2017, 30
  • [8] Robust and Efficient Transfer Learning with Hidden Parameter Markov Decision Processes
    Killian, Taylor W.
    Konidaris, George
    Doshi-Velez, Finale
    THIRTY-FIRST AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2017, : 4949 - 4950
  • [9] Distributionally Robust Counterpart in Markov Decision Processes
    Yu, Pengqian
    Xu, Huan
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2016, 61 (09) : 2538 - 2543
  • [10] Robust Markov Decision Processes: Beyond Rectangularity
    Goyal, Vineet
    Grand-Clement, Julien
    MATHEMATICS OF OPERATIONS RESEARCH, 2023, 48 (01) : 203 - 226