Corruption-Robust Exploration in Episodic Reinforcement Learning

Cited by: 0
Authors
Lykouris, Thodoris [1 ]
Simchowitz, Max [2 ]
Slivkins, Aleksandrs [3 ]
Sun, Wen [4 ]
Affiliations
[1] MIT, Sloan Sch Management, Cambridge, MA 02139 USA
[2] MIT, Dept Elect Engn & Comp Sci, Cambridge, MA 02139 USA
[3] Microsoft Res Lab, New York, NY 10012 USA
[4] Cornell Univ, Dept Comp Sci, Ithaca, NY 14850 USA
Keywords
reinforcement learning; bandit feedback; exploration; robustness; regret; multiarmed bandit
DOI
10.1287/moor.2021.0202
Chinese Library Classification
C93 [Management]; O22 [Operations Research]
Subject Classification Codes
070105; 12; 1201; 1202; 120202
Abstract
We initiate the study of episodic reinforcement learning (RL) under adversarial corruptions in both the rewards and the transition probabilities of the underlying system, extending recent results for the special case of multiarmed bandits. We provide a framework that modifies the aggressive exploration enjoyed by existing reinforcement learning approaches based on optimism in the face of uncertainty by complementing them with principles from action elimination. Importantly, our framework circumvents the major challenges posed by naively applying action elimination in the RL setting, as formalized by a lower bound we demonstrate. Our framework yields efficient algorithms that (a) attain near-optimal regret in the absence of corruptions and (b) adapt to unknown levels of corruption, enjoying regret guarantees that degrade gracefully in the total corruption encountered. To showcase the generality of our approach, we derive results for both tabular settings (where states and actions are finite) and linear Markov decision process settings (where the dynamics and rewards admit a linear underlying representation). Notably, our work provides the first sublinear regret guarantee that accommodates any deviation from purely independent and identically distributed transitions in the bandit-feedback model for episodic reinforcement learning.
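To illustrate the action-elimination principle the abstract refers to, below is a minimal sketch in the multiarmed-bandit special case it builds on; this is not the paper's algorithm. An arm stays active only while its upper confidence bound remains above the best lower confidence bound, and the confidence radius is widened by C/n to absorb a known corruption budget C in the observed rewards (the paper's algorithms, by contrast, adapt to an unknown corruption level and handle the full episodic RL setting). The function name, parameters, and constants are illustrative assumptions.

import math
import random

def corruption_robust_elimination(true_means, horizon, corruption_budget, seed=0):
    """Active-arm elimination with a confidence radius widened to absorb corruption."""
    rng = random.Random(seed)
    k = len(true_means)
    active = list(range(k))       # arms that have not been eliminated
    counts = [0] * k              # number of pulls per arm
    sums = [0.0] * k              # sum of observed (possibly corrupted) rewards
    best_mean = max(true_means)
    regret = 0.0

    for t in range(horizon):
        # Round-robin over the active set keeps pull counts balanced.
        arm = active[t % len(active)]
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        # An adversary could corrupt `reward` here, up to `corruption_budget` in total.
        counts[arm] += 1
        sums[arm] += reward
        regret += best_mean - true_means[arm]

        # Confidence radius: Hoeffding term plus a corruption term of C / n.
        def radius(a):
            n = counts[a]
            return math.sqrt(2.0 * math.log(max(horizon, 2)) / n) + corruption_budget / n

        if all(counts[a] > 0 for a in active):
            mean = {a: sums[a] / counts[a] for a in active}
            best_lcb = max(mean[a] - radius(a) for a in active)
            # Eliminate arms whose optimistic estimate falls below the best pessimistic one.
            active = [a for a in active if mean[a] + radius(a) >= best_lcb]

    return regret

# Example run: 3 Bernoulli arms, horizon 5000, corruption budget 50.
print(corruption_robust_elimination([0.5, 0.6, 0.7], 5000, 50.0))

Because an arm is only discarded when even its optimistic estimate is dominated, up to C units of adversarial corruption cannot force the elimination of the best arm; this is the elimination-style caution that the paper combines with optimism-based exploration.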
Pages: 29
Related Papers
50 records in total (items 41-50 shown)
  • [41] Adaptive Discretization for Episodic Reinforcement Learning in Metric Spaces
    Sinclair, Sean R.
    Banerjee, Siddhartha
    Yu, Christina Lee
    PROCEEDINGS OF THE ACM ON MEASUREMENT AND ANALYSIS OF COMPUTING SYSTEMS, 2019, 3 (03)
  • [42] Adaptive Discretization for Episodic Reinforcement Learning in Metric Spaces
    Sinclair, Sean R.
    Banerjee, Siddhartha
    Yu, Christina Lee
    Performance Evaluation Review, 2020, 48 (01): 17 - 18
  • [43] Quantum Deep Reinforcement Learning Based on Episodic Memory
    Zhu X.
    Hou X.
    Wu S.
    Zhu F.
    Dianzi Keji Daxue Xuebao/Journal of the University of Electronic Science and Technology of China, 2022, 51 (02): 170 - 175
  • [44] On the Importance of Exploration for Generalization in Reinforcement Learning
    Jiang, Yiding
    Kolter, J. Zico
    Raileanu, Roberta
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [45] Exploration in deep reinforcement learning: A survey
    Ladosz, Pawel
    Weng, Lilian
    Kim, Minwoo
    Oh, Hyondong
    INFORMATION FUSION, 2022, 85 : 1 - 22
  • [46] Distributional Reinforcement Learning for Efficient Exploration
    Mavrin, Borislav
    Yao, Hengshuai
    Kong, Linglong
    Wu, Kaiwen
    Yu, Yaoliang
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [47] Adaptive Exploration Strategies for Reinforcement Learning
    Hwang, Kao-Shing
    Li, Chih-Wen
    Jiang, Wei-Cheng
    2017 INTERNATIONAL CONFERENCE ON SYSTEM SCIENCE AND ENGINEERING (ICSSE), 2017, : 16 - 19
  • [48] Uncertainty Quantification and Exploration for Reinforcement Learning
    Zhu, Yi
    Dong, Jing
    Lam, Henry
    OPERATIONS RESEARCH, 2024, 72 (04) : 1689 - 1709
  • [49] Coordinated Exploration in Concurrent Reinforcement Learning
    Dimakopoulou, Maria
    Van Roy, Benjamin
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80, 2018, 80
  • [50] Overcoming Exploration in Reinforcement Learning with Demonstrations
    Nair, Ashvin
    McGrew, Bob
    Andrychowicz, Marcin
    Zaremba, Wojciech
    Abbeel, Pieter
    2018 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2018, : 6292 - 6299