Inverse reinforcement learning in contextual MDPs

Citations: 0
Authors
Stav Belogolovsky
Philip Korsunsky
Shie Mannor
Chen Tessler
Tom Zahavy
Affiliations
[1] Technion - Israel Institute of Technology, Faculty of Electrical and Computer Engineering
[2] Nvidia Research
Source
Machine Learning | 2021 / Volume 110
Keywords
Reinforcement learning; Contextual; Inverse
DOI
Not available
Abstract
We consider the task of Inverse Reinforcement Learning in Contextual Markov Decision Processes (MDPs). In this setting, contexts, which define the reward and transition kernel, are sampled from a distribution. In addition, although the reward is a function of the context, it is not provided to the agent. Instead, the agent observes demonstrations from an optimal policy. The goal is to learn the reward mapping so that the agent acts optimally even when encountering previously unseen contexts, a setting also known as zero-shot transfer. We formulate this problem as a non-differentiable convex optimization problem and propose a novel algorithm to compute its subgradients. Based on this scheme, we analyze several methods, comparing their sample complexity and scalability both theoretically and empirically. Most importantly, we show both theoretically and empirically that our algorithms perform zero-shot transfer, i.e., generalize to new and unseen contexts. Specifically, we present empirical experiments in a dynamic treatment regime, where the goal is to learn a reward function that explains the behavior of expert physicians from recorded data of their treatment of patients diagnosed with sepsis.
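
The optimization described in the abstract can be illustrated with a minimal Python sketch (not the authors' implementation). It assumes a linear reward mapping r_c = W c over context features, expert feature expectations per context, and access to a planner that returns the feature expectations of an optimal policy for a given reward; all function and variable names below (e.g., plan_feature_expectations) are hypothetical.

# Minimal sketch: contextual IRL via projected subgradient descent on a linear
# reward mapping W, where the reward vector for context c is r_c = W @ c.
# Assumes expert feature expectations mu_E(c) are given for each training context,
# and plan_feature_expectations(r) returns the feature expectations of an optimal
# policy under reward vector r (e.g., computed by value iteration).
import numpy as np

def irl_contextual_subgradient(contexts, expert_feats, plan_feature_expectations,
                               n_features, n_iters=100, lr=0.1):
    d_c = len(contexts[0])
    W = np.zeros((n_features, d_c))            # linear reward mapping: r_c = W @ c
    for _ in range(n_iters):
        G = np.zeros_like(W)                   # accumulated subgradient
        for c, mu_E in zip(contexts, expert_feats):
            r_c = W @ c                        # reward induced by the current mapping
            mu_pi = plan_feature_expectations(r_c)
            # Subgradient of the loss  max_pi <r_c, mu(pi) - mu_E(c)>  w.r.t. W
            G += np.outer(mu_pi - mu_E, c)
        W -= lr * G / len(contexts)            # subgradient step
        W /= max(1.0, np.linalg.norm(W))       # project onto a unit-norm ball to keep W bounded
    return W

The descent direction pushes W so that the expert's feature expectations score at least as well as those of the best competing policy; at a new (unseen) context c', acting optimally under the learned reward W @ c' is what the zero-shot transfer claim refers to.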
Pages
2295-2334 (39 pages)