Near Instance-Optimal PAC Reinforcement Learning for Deterministic MDPs

Cited by: 0
Authors
Tirinzoni, Andrea [1 ]
Al-Marjani, Aymen [2 ]
Kaufmann, Emilie [3 ]
Affiliations
[1] Meta AI, Paris, France
[2] ENS Lyon, UMPA, Lyon, France
[3] Univ Lille, CNRS, INRIA, Cent Lille,UMR 9189,CRIStAL, Lille, France
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022 | 2022
Keywords
MULTIARMED BANDIT;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
In probably approximately correct (PAC) reinforcement learning (RL), an agent is required to identify an epsilon-optimal policy with probability 1 - delta. While minimax optimal algorithms exist for this problem, its instance-dependent complexity remains elusive in episodic Markov decision processes (MDPs). In this paper, we propose the first nearly matching (up to a horizon squared factor and logarithmic terms) upper and lower bounds on the sample complexity of PAC RL in deterministic episodic MDPs with finite state and action spaces. In particular, our bounds feature a new notion of sub-optimality gap for state-action pairs that we call the deterministic return gap. While our instance-dependent lower bound is written as a linear program, our algorithms are very simple and do not require solving such an optimization problem during learning. Their design and analyses employ novel ideas, including graph-theoretical concepts (minimum flows) and a new maximum-coverage exploration strategy.
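For intuition about the setting the abstract describes, note that with deterministic transitions and rewards, once the model is known an optimal policy can be computed exactly by backward induction over the horizon; the difficulty studied in the paper is learning the model sample-efficiently, not planning. The following is a minimal illustrative sketch of such a deterministic episodic MDP (the toy MDP, all names, and the numbers are invented here; this is not the paper's algorithm):

```python
# Backward induction in a tiny deterministic episodic MDP.
# Illustrates the setting in the abstract (deterministic transitions,
# finite state/action spaces, finite horizon H); purely a toy example.

H = 3                      # horizon
S = [0, 1, 2]              # states
A = [0, 1]                 # actions

def step(s, a):
    """Deterministic transition: next state is a fixed function of (s, a)."""
    return (s + a + 1) % len(S)

def reward(h, s, a):
    """Stage-dependent deterministic reward in [0, 1]."""
    return 1.0 if (s == 2 and a == 1) else 0.1 * a

# V[h][s] = optimal return-to-go from state s at stage h;
# pi[h][s] = an optimal action at (h, s).
V = [[0.0] * len(S) for _ in range(H + 1)]
pi = [[0] * len(S) for _ in range(H)]
for h in range(H - 1, -1, -1):
    for s in S:
        q = [reward(h, s, a) + V[h + 1][step(s, a)] for a in A]
        pi[h][s] = max(A, key=lambda a: q[a])
        V[h][s] = max(q)

print(V[0][0])  # optimal H-step return from initial state 0 -> 1.2
```

A policy whose return from the initial state is within epsilon of `V[0][0]` is epsilon-optimal in the PAC sense used above; the paper's contribution is characterizing, instance by instance, how many episodes are needed to identify one with probability 1 - delta.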
Pages: 14