Corruption-Robust Exploration in Episodic Reinforcement Learning

Times Cited: 0
Authors
Lykouris, Thodoris [1 ]
Simchowitz, Max [2 ]
Slivkins, Aleksandrs [3 ]
Sun, Wen [4 ]
Affiliations
[1] MIT, Sloan Sch Management, Cambridge, MA 02139 USA
[2] MIT, Dept Elect Engn & Comp Sci, Cambridge, MA 02139 USA
[3] Microsoft Res Lab, New York, NY 10012 USA
[4] Cornell Univ, Dept Comp Sci, Ithaca, NY 14850 USA
Keywords
reinforcement learning; bandit feedback; exploration; robustness; regret; MULTIARMED BANDIT;
DOI
10.1287/moor.2021.0202
Chinese Library Classification (CLC)
C93 [Management Science]; O22 [Operations Research];
Subject Classification Codes
070105 ; 12 ; 1201 ; 1202 ; 120202 ;
Abstract
We initiate the study of episodic reinforcement learning (RL) under adversarial corruptions in both the rewards and the transition probabilities of the underlying system, extending recent results for the special case of multiarmed bandits. We provide a framework that tempers the aggressive exploration employed by existing reinforcement learning approaches based on optimism in the face of uncertainty, complementing them with principles from action elimination. Importantly, our framework circumvents the major challenges posed by naively applying action elimination in the RL setting, as formalized by a lower bound we demonstrate. Our framework yields efficient algorithms that (a) attain near-optimal regret in the absence of corruptions and (b) adapt to unknown levels of corruption, enjoying regret guarantees that degrade gracefully in the total corruption encountered. To showcase the generality of our approach, we derive results for both tabular settings (where states and actions are finite) and linear Markov decision process settings (where the dynamics and rewards admit a linear underlying representation). Notably, our work provides the first sublinear regret guarantee that accommodates any deviation from purely independent and identically distributed transitions in the bandit-feedback model for episodic reinforcement learning.
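For intuition only, the sketch below illustrates the generic optimism-in-the-face-of-uncertainty template referenced in the abstract: optimistic backward induction over an empirical tabular model, with an exploration bonus widened by an assumed corruption budget. Everything here (the function name, the bonus form, the corruption_budget knob) is a hypothetical illustration under stated assumptions and does not reproduce the paper's algorithm or its guarantees.

```python
# Minimal, hypothetical sketch: optimistic backward induction on an empirical
# tabular model, with a confidence bonus inflated by an assumed corruption
# budget. Illustrative only; not the algorithm analyzed in the paper.
import numpy as np

def optimistic_q_values(P_hat, R_hat, counts, H, delta=0.05, corruption_budget=0.0):
    """Return optimistic Q-values of shape (H, S, A).

    P_hat  : (S, A, S) empirical transition probabilities
    R_hat  : (S, A)    empirical mean rewards in [0, 1]
    counts : (S, A)    visit counts used to size the confidence bonus
    H      : horizon
    corruption_budget : assumed total corruption C (hypothetical knob)
    """
    S, A, _ = P_hat.shape
    n = np.maximum(counts, 1)
    Q = np.zeros((H, S, A))
    V_next = np.zeros(S)  # value at step h + 1; zero beyond the horizon
    for h in reversed(range(H)):
        # Hoeffding-style bonus plus an extra term for the assumed corruption.
        bonus = H * np.sqrt(np.log(S * A * H / delta) / n) + H * corruption_budget / n
        Q[h] = np.clip(R_hat + P_hat @ V_next + bonus, 0.0, H)  # optimistic Bellman backup
        V_next = Q[h].max(axis=1)
    return Q

# Tiny usage example on a random MDP.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    S, A, H = 4, 2, 5
    P = rng.dirichlet(np.ones(S), size=(S, A))   # shape (S, A, S)
    R = rng.uniform(size=(S, A))
    N = rng.integers(1, 50, size=(S, A))
    print(optimistic_q_values(P, R, N, H, corruption_budget=3.0)[0])
```

In the paper's setting the corruption level is not assumed known in advance; the sketch merely shows where a corruption-dependent widening of the confidence region could enter an optimistic backup.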
Pages: 29