Near-optimal Policy Optimization Algorithms for Learning Adversarial Linear Mixture MDPs

Cited by: 0
Authors
He, Jiafan [1 ]
Zhou, Dongruo [1 ]
Gu, Quanquan [1 ]
Institutions
[1] University of California, Los Angeles, Los Angeles, CA 90024, USA
Source
INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 151, 2022
Funding
National Science Foundation (USA)
Keywords
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Learning Markov decision processes (MDPs) in the presence of an adversary is a challenging problem in reinforcement learning (RL). In this paper, we study RL in episodic MDPs with adversarial rewards and full-information feedback, where the unknown transition probability function is a linear function of a given feature mapping and the reward function can change arbitrarily from episode to episode. We propose an optimistic policy optimization algorithm, POWERS, and show that it achieves $\tilde{O}(dH\sqrt{T})$ regret, where $H$ is the length of each episode, $T$ is the number of interactions with the MDP, and $d$ is the dimension of the feature mapping. Furthermore, we prove a matching lower bound of $\tilde{\Omega}(dH\sqrt{T})$ up to logarithmic factors. Our key technical contributions are two-fold: (1) a new value function estimator based on importance weighting, and (2) a tighter confidence set for the transition kernel. Together, they lead to a nearly minimax optimal regret bound.
Pages: 22
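
The recipe sketched in the abstract (an optimistic estimate of a linearly parameterized transition model combined with a policy optimization step) can be made concrete. Below is a minimal illustrative sketch, not the authors' POWERS implementation: it pairs value-targeted ridge regression for the transition parameter with an ellipsoidal exploration bonus and a mirror-descent (multiplicative-weights) policy update on a toy linear mixture MDP. The toy environment, the constants `eta`, `beta`, `lam`, and all variable names are hypothetical choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, H, d = 4, 3, 5, 4          # states, actions, horizon, feature dimension
eta, beta, lam = 0.5, 1.0, 1.0   # mirror-descent step, bonus radius, ridge reg.

# Toy linear mixture MDP: P(s' | s, a) = <phi(s' | s, a), theta_star>.
phi = rng.random((S, A, S, d))
theta_star = rng.random(d)
P = phi @ theta_star
P /= P.sum(axis=2, keepdims=True)          # normalize into valid transitions

Sigma = lam * np.eye(d)                    # ridge-regression statistics for theta
b_vec = np.zeros(d)
pi = np.full((H, S, A), 1.0 / A)           # uniform initial policy

for episode in range(50):
    r = rng.random((H, S, A))              # adversarial reward, revealed in full
    theta_hat = np.linalg.solve(Sigma, b_vec)

    # Optimistic value iteration: add an ellipsoidal bonus to the Q-estimate.
    V = np.zeros((H + 1, S))
    Q = np.zeros((H, S, A))
    for h in reversed(range(H)):
        for s in range(S):
            for a in range(A):
                # Regression feature: expected next value under the model.
                psi = phi[s, a].T @ V[h + 1]
                bonus = beta * np.sqrt(psi @ np.linalg.solve(Sigma, psi))
                Q[h, s, a] = min(H - h, r[h, s, a] + psi @ theta_hat + bonus)
        V[h] = (pi[h] * Q[h]).sum(axis=1)

    # Mirror-descent (exponential-weights) update on the optimistic Q-values.
    pi *= np.exp(eta * Q)
    pi /= pi.sum(axis=2, keepdims=True)

    # Roll out the new policy; regress next-state values onto the features
    # (value-targeted regression) to refine the transition-parameter estimate.
    s = 0
    for h in range(H):
        a = rng.choice(A, p=pi[h, s])
        s_next = rng.choice(S, p=P[s, a])
        psi = phi[s, a].T @ V[h + 1]
        Sigma += np.outer(psi, psi)
        b_vec += psi * V[h + 1, s_next]
        s = s_next
```

As episodes accumulate, `Sigma` grows along frequently visited feature directions, the exploration bonus shrinks, and the policy concentrates on high-value actions. In the paper itself, the importance-weighted value estimator and the tighter confidence set are what sharpen the bonus radius enough to reach the $\tilde{O}(dH\sqrt{T})$ rate; the fixed `beta` above is only a stand-in.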