PROJECTED STATE-ACTION BALANCING WEIGHTS FOR OFFLINE REINFORCEMENT LEARNING

Cited by: 1
Authors
Wang, Jiayi [1 ]
Qi, Zhengling [2 ]
Wong, Raymond K. W. [3 ]
Affiliations
[1] Univ Texas Dallas, Dept Math Sci, Richardson, TX 75083 USA
[2] George Washington Univ, Dept Decis Sci, Washington, DC 20052 USA
[3] Texas A&M Univ, Dept Stat, College Stn, TX 77843 USA
Funding
U.S. National Science Foundation
Keywords
Infinite horizons; Markov decision process; Policy evaluation; Reinforcement learning; DYNAMIC TREATMENT REGIMES; RATES; CONVERGENCE; INFERENCE;
DOI
10.1214/23-AOS2302
Chinese Library Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Discipline Codes
020208; 070103; 0714
Abstract
Off-policy evaluation is considered a fundamental and challenging problem in reinforcement learning (RL). This paper focuses on value estimation of a target policy based on pre-collected data generated from a possibly different policy, under the framework of infinite-horizon Markov decision processes. Motivated by the recently developed marginal importance sampling method in RL and the covariate balancing idea in causal inference, we propose a novel estimator with approximately projected state-action balancing weights for the policy value estimation. We obtain the convergence rate of these weights and show that the proposed value estimator is asymptotically normal under technical conditions. Asymptotically, our results scale with both the number of trajectories and the number of decision points in each trajectory. As such, consistency can still be achieved with a limited number of subjects when the number of decision points diverges. In addition, we develop a necessary and sufficient condition for establishing the well-posedness of the operator that relates to the nonparametric Q-function estimation in the off-policy setting, which characterizes the difficulty of Q-function estimation and may be of independent interest. Numerical experiments demonstrate the promising performance of our proposed estimator.
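The marginal-importance-sampling idea underlying the estimator can be illustrated with a minimal numerical sketch. Everything below is a synthetic stand-in: the transitions, rewards, and weights are simulated, whereas the paper's actual weights are estimated by approximately balancing projected state-action features. The sketch only shows the final step, a self-normalized weighted average of rewards as the value estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy batch of transitions collected under a behavior policy;
# only the rewards matter for the weighted-average step shown here.
n_transitions = 1000
rewards = rng.normal(loc=1.0, scale=0.5, size=n_transitions)

# Hypothetical marginal importance weights w(S, A): the ratio of the
# target policy's discounted state-action visitation distribution to
# the behavior policy's. Here they are drawn at random and then
# self-normalized to have mean one, mimicking estimated weights.
raw_weights = rng.lognormal(mean=0.0, sigma=0.3, size=n_transitions)
weights = raw_weights / raw_weights.mean()

# Marginal-importance-sampling value estimate: V(pi) ~ E[w(S, A) * R].
value_estimate = float(np.mean(weights * rewards))
print(value_estimate)
```

Because the synthetic weights are independent of the rewards, the estimate lands near the behavior-policy mean reward of 1.0; with real balancing weights the reweighting shifts the average toward the target policy's value.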
Pages: 1639-1665 (27 pages)