Near-Optimal Model-Free Reinforcement Learning in Non-Stationary Episodic MDPs

Cited by: 0
Authors
Mao, Weichao [1 ,2 ]
Zhang, Kaiqing [1 ,2 ]
Zhu, Ruihao [3 ]
Simchi-Levi, David [3 ]
Basar, Tamer [1 ,2 ]
Affiliations
[1] Univ Illinois, Dept Elect & Comp Engn, Urbana, IL 61801 USA
[2] Univ Illinois, Coordinated Sci Lab, Urbana, IL 61801 USA
[3] MIT, Inst Data Syst & Soc, 77 Massachusetts Ave, Cambridge, MA 02139 USA
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We consider model-free reinforcement learning (RL) in non-stationary Markov decision processes. Both the reward functions and the state transition functions are allowed to vary arbitrarily over time as long as their cumulative variations do not exceed certain variation budgets. We propose Restarted Q-Learning with Upper Confidence Bounds (RestartQ-UCB), the first model-free algorithm for non-stationary RL, and show that it outperforms existing solutions in terms of dynamic regret. Specifically, RestartQ-UCB with Freedman-type bonus terms achieves a dynamic regret bound of Õ(S^(1/3) A^(1/3) Δ^(1/3) H T^(2/3)), where S and A are the numbers of states and actions, respectively, Δ > 0 is the variation budget, H is the number of time steps per episode, and T is the total number of time steps. We further show that our algorithm is nearly optimal by establishing an information-theoretic lower bound of Ω(S^(1/3) A^(1/3) Δ^(1/3) H^(2/3) T^(2/3)), the first lower bound in non-stationary RL. Numerical experiments validate the advantages of RestartQ-UCB in terms of both cumulative rewards and computational efficiency. We further demonstrate the power of our results in the context of multi-agent RL, where non-stationarity is a key challenge.
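To make the restart idea in the abstract concrete, the following is a minimal illustrative sketch, not the authors' algorithm: it uses simpler Hoeffding-style UCB bonuses and a fixed restart period, whereas the paper's RestartQ-UCB uses Freedman-type bonuses and a restart schedule tuned to the variation budget Δ. All names (`restartq_ucb_sketch`, `toy_reward`, `toy_step`) and the toy MDP are hypothetical, introduced here only for illustration.

```python
import math
import random

def restartq_ucb_sketch(num_states, num_actions, H, num_episodes,
                        restart_every, step, reward, seed=0):
    """Restarted optimistic Q-learning on a toy episodic MDP (sketch only).

    `step(rng, h, s, a) -> s'` and `reward(ep, h, s, a) -> float in [0, 1]`
    are caller-supplied and may depend on the episode index `ep`, which is
    how non-stationarity enters. Returns the total reward collected.
    """
    rng = random.Random(seed)
    log_term = math.log(2 * num_states * num_actions * H * num_episodes)
    total_reward = 0.0
    for ep in range(num_episodes):
        if ep % restart_every == 0:
            # Restart: wipe all estimates so data gathered before an
            # environment change cannot bias the current estimates.
            Q = [[[float(H)] * num_actions for _ in range(num_states)]
                 for _ in range(H)]        # optimistic initialization at H
            N = [[[0] * num_actions for _ in range(num_states)]
                 for _ in range(H)]        # per-(h, s, a) visit counts
        s = 0
        for h in range(H):
            # Act greedily with respect to the optimistic Q estimates.
            a = max(range(num_actions), key=lambda x: Q[h][s][x])
            N[h][s][a] += 1
            t = N[h][s][a]
            alpha = (H + 1) / (H + t)                  # step size (H+1)/(H+t)
            bonus = math.sqrt(H ** 3 * log_term / t)   # Hoeffding-style UCB bonus
            r = reward(ep, h, s, a)
            total_reward += r
            s_next = step(rng, h, s, a)
            v_next = max(Q[h + 1][s_next]) if h + 1 < H else 0.0
            # Optimistic Q-learning update, clipped back to the max value H.
            Q[h][s][a] = min((1 - alpha) * Q[h][s][a] + alpha * (r + v_next + bonus),
                             float(H))
            s = s_next
    return total_reward

# Toy non-stationary environment: the rewarding action flips mid-run.
def toy_reward(ep, h, s, a):
    good = 0 if ep < 50 else 1
    return 1.0 if a == good else 0.0

def toy_step(rng, h, s, a):
    return 0  # single effective state

total = restartq_ucb_sketch(num_states=1, num_actions=2, H=5,
                            num_episodes=100, restart_every=25,
                            step=toy_step, reward=toy_reward)
```

Restarting every `restart_every` episodes is the key mechanism: after a restart, the learner rebuilds its optimistic estimates from post-change data only, which is what yields sublinear dynamic regret when the cumulative variation is bounded.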
Pages: 12
Related Papers
50 records in total
  • [31] Near-Optimal Reinforcement Learning with Self-Play
    Bai, Yu
    Jin, Chi
    Yu, Tiancheng
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [32] Near Instance-Optimal PAC Reinforcement Learning for Deterministic MDPs
    Tirinzoni, Andrea
    Al-Marjani, Aymen
    Kaufmann, Emilie
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022,
  • [33] Adaptive deep reinforcement learning for non-stationary environments
    Zhu, Jin
    Wei, Yutong
    Kang, Yu
    Jiang, Xiaofeng
    Dullerud, Geir E.
    SCIENCE CHINA-INFORMATION SCIENCES, 2022, 65 (10): 225 - 241
  • [34] Choosing search heuristics by non-stationary reinforcement learning
    Nareyek, A
    METAHEURISTICS: COMPUTER DECISION-MAKING, 2004, 86 : 523 - +
  • [37] Model-Free Representation Learning and Exploration in Low-Rank MDPs
    Modi, Aditya
    Chen, Jinglin
    Krishnamurthy, Akshay
    Jiang, Nan
    Agarwal, Alekh
    JOURNAL OF MACHINE LEARNING RESEARCH, 2024, 25 : 1 - 76
  • [38] Model-Free Non-Stationarity Detection and Adaptation in Reinforcement Learning
    Canonaco, Giuseppe
    Restelli, Marcello
    Roveri, Manuel
    ECAI 2020: 24TH EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, 325 : 1047 - 1054
  • [39] Near-optimal Trajectory Tracking in Quadcopters using Reinforcement Learning
    Engelhardt, Randal
    Velazquez, Alberto
    Sardarmehni, Tohid
    IFAC PAPERSONLINE, 2024, 58 (28): : 61 - 65
  • [40] Polynomial-time reinforcement learning of near-optimal policies
    Pivazyan, K
    Shoham, Y
    EIGHTEENTH NATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE (AAAI-02)/FOURTEENTH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE (IAAI-02), PROCEEDINGS, 2002, : 205 - 210