Weighted Q-learning for optimal dynamic treatment regimes with nonignorable missing covariates

Cited by: 0
Authors
Sun, Jian [1 ]
Fu, Bo [1 ]
Su, Li [2 ]
Affiliations
[1] Fudan Univ, Sch Data Sci, Shanghai 200433, Peoples R China
[2] Univ Cambridge, Sch Clin Med, MRC Biostat Unit, Cambridge CB2 0SR, England
Funding
National Natural Science Foundation of China; UK Medical Research Council;
Keywords
backward-induction-induced missing pseudo-outcome; future-independent missingness; nonignorable missing data; nonresponse instrumental variable; Q-learning; sensitivity analysis;
DOI
10.1093/biomtc/ujae161
CLC classification number
Q [Biological Sciences];
Discipline classification codes
07; 0710; 09;
Abstract
Dynamic treatment regimes (DTRs) formalize medical decision-making as a sequence of rules for different stages, mapping patient-level information to recommended treatments. In practice, estimating an optimal DTR using observational data from electronic medical record (EMR) databases can be complicated by nonignorable missing covariates resulting from informative monitoring of patients. Since complete case analysis can provide consistent estimation of outcome model parameters under the assumption of outcome-independent missingness, Q-learning is a natural approach to accommodating nonignorable missing covariates. However, the backward induction algorithm used in Q-learning can introduce challenges, as nonignorable missing covariates at later stages can result in nonignorable missing pseudo-outcomes at earlier stages, leading to suboptimal DTRs, even if the longitudinal outcome variables are fully observed. To address this unique missing data problem in DTR settings, we propose 2 weighted Q-learning approaches where inverse probability weights for missingness of the pseudo-outcomes are obtained through estimating equations with valid nonresponse instrumental variables or sensitivity analysis. The asymptotic properties of the weighted Q-learning estimators are derived, and the finite-sample performance of the proposed methods is evaluated and compared with alternative methods through extensive simulation studies. Using EMR data from the Medical Information Mart for Intensive Care database, we apply the proposed methods to investigate the optimal fluid strategy for sepsis patients in intensive care units.
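The backward-induction problem the abstract describes can be illustrated with a minimal sketch: in a two-stage setting, when a stage-2 covariate is missing, the stage-1 pseudo-outcome (the maximized stage-2 Q-function) is also missing, and weighted Q-learning fits each stage's regression among complete cases with inverse-probability-of-observation weights. The simulation design, the linear Q-models, and the use of the true observation propensities (which the paper would instead estimate via estimating equations with a nonresponse instrument) are all illustrative assumptions, not the authors' exact specification.

```python
# Toy two-stage weighted Q-learning sketch with inverse probability weights
# for missing pseudo-outcomes. Purely illustrative: models, variable names,
# and plugged-in true observation propensities are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Simulate a two-stage observational study.
x1 = rng.normal(size=n)                 # stage-1 covariate (fully observed)
a1 = rng.integers(0, 2, size=n)         # stage-1 treatment
x2 = 0.5 * x1 + rng.normal(size=n)      # stage-2 covariate (may be missing)
a2 = rng.integers(0, 2, size=n)         # stage-2 treatment
y = x1 + x2 + a2 * (1.0 + x2) + rng.normal(size=n)  # final outcome

# Missingness of x2 depends on x2 itself; since the stage-1 pseudo-outcome
# is a function of x2, its missingness at stage 1 is nonignorable.
p_obs = 1.0 / (1.0 + np.exp(-(0.5 + x2)))
obs = rng.uniform(size=n) < p_obs

def wls(X, yv, w):
    """Weighted least squares: argmin_b sum_i w_i (y_i - X_i b)^2."""
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], yv * sw, rcond=None)
    return beta

w = 1.0 / p_obs        # inverse probability weights (true propensities here)
cc = obs               # complete cases

# Stage 2: weighted Q-regression among complete cases.
X2 = np.column_stack([np.ones(cc.sum()), x1[cc], x2[cc],
                      a2[cc], a2[cc] * x2[cc]])
b2 = wls(X2, y[cc], w[cc])

def q2(a, x1v, x2v, b):
    return b[0] + b[1] * x1v + b[2] * x2v + b[3] * a + b[4] * a * x2v

# Stage-1 pseudo-outcome: maximized fitted stage-2 Q-function,
# computable only where x2 was observed.
pseudo = np.maximum(q2(0, x1[cc], x2[cc], b2), q2(1, x1[cc], x2[cc], b2))

# Stage 1: weighted regression of the partially missing pseudo-outcome.
X1 = np.column_stack([np.ones(cc.sum()), x1[cc], a1[cc], a1[cc] * x1[cc]])
b1 = wls(X1, pseudo, w[cc])

print("stage-2 coefficients:", np.round(b2, 2))
print("stage-1 coefficients:", np.round(b1, 2))
```

With the weights applied, the stage-2 coefficients recover the data-generating values (approximately (0, 1, 1, 1, 1) here); the estimated stage-1 rule would then treat whenever the fitted a1 "blip" b1[2] + b1[3] * x1 is positive. In this toy outcome model a1 has no effect, so the stage-1 blip is near zero by construction.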
Pages: 10