Learning Optimal Dynamic Treatment Regimens Subject to Stagewise Risk Controls

Cited by: 0
Authors
Liu, Mochuan [1 ]
Wang, Yuanjia [2 ]
Fu, Haoda [3 ]
Zeng, Donglin [4 ]
Affiliations
[1] Univ North Carolina Chapel Hill, Dept Biostat, Chapel Hill, NC 27599 USA
[2] Columbia Univ, Dept Biostat, New York, NY 10032 USA
[3] Eli Lilly & Co, Indianapolis, IN 46285 USA
[4] Univ Michigan, Dept Biostat, Ann Arbor, MI 48109 USA
Keywords
Dynamic treatment regimens; Precision medicine; Benefit-risk tradeoff; Acute adverse events; Weighted support vector machine; INDIVIDUALIZED TREATMENT RULES; DESIGN; REGRET; INFERENCE;
DOI
Not available
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology]
Discipline code
0812
Abstract
Dynamic treatment regimens (DTRs) aim to tailor individualized sequential treatment rules that maximize cumulative beneficial outcomes by accommodating patients' heterogeneity in decision-making. For many chronic diseases, including type 2 diabetes mellitus (T2D), treatments are usually multifaceted in the sense that aggressive treatments with a higher expected reward are also likely to elevate the risk of acute adverse events. In this paper, we propose a new weighted learning framework, namely benefit-risk dynamic treatment regimens (BR-DTRs), to address the benefit-risk trade-off. The new framework relies on a backward learning procedure that restricts the induced risk of the treatment rule to be no larger than a pre-specified risk constraint at each treatment stage. Computationally, the estimated treatment rule solves a weighted support vector machine problem with a modified smooth constraint. Theoretically, we show that the proposed DTRs are Fisher consistent, and we further obtain the convergence rates for both the value and risk functions. Finally, the performance of the proposed method is demonstrated via extensive simulation studies and an application to a real study of T2D patients.
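The abstract's core idea can be illustrated with a minimal single-stage sketch: minimize a reward-weighted hinge loss (the outcome-weighted-learning surrogate) subject to a smoothed constraint on the induced adverse-event risk. This is not the authors' implementation; the simulated data, the known propensity of 0.5, the sigmoid smoothing constant, and the risk budget `tau` are all illustrative assumptions.

```python
# Hypothetical single-stage sketch of risk-constrained outcome-weighted
# learning (illustrative only, not the paper's exact BR-DTR algorithm).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, p = 200, 3
X = rng.normal(size=(n, p))
A = rng.choice([-1, 1], size=n)                # randomized treatment
reward = 1.0 + X[:, 0] * A + rng.normal(scale=0.5, size=n)   # benefit outcome
risk = np.clip(0.5 + 0.4 * A * X[:, 1], 0.0, None)           # adverse-event risk
prop = 0.5                                     # known propensity P(A = a | X)
tau = 0.6                                      # pre-specified stagewise risk budget
lam = 1.0                                      # ridge penalty on the linear rule

def decision(theta):
    return X @ theta[:-1] + theta[-1]          # linear rule f(x) = x'beta + b

def objective(theta):
    f = decision(theta)
    hinge = np.maximum(0.0, 1.0 - A * f)       # convex surrogate for 1{A != sign(f)}
    return np.mean(reward / prop * hinge) + lam * np.sum(theta[:-1] ** 2)

def risk_slack(theta):
    f = decision(theta)
    # smooth sigmoid approximation of the indicator 1{A = sign(f)},
    # mimicking the "modified smooth constraint" idea
    agree = 1.0 / (1.0 + np.exp(-5.0 * A * f))
    induced_risk = np.mean(risk / prop * agree)  # IPW estimate of risk under the rule
    return tau - induced_risk                    # SLSQP requires g(theta) >= 0

res = minimize(objective, x0=np.zeros(p + 1), method="SLSQP",
               constraints=[{"type": "ineq", "fun": risk_slack}])
rule = np.sign(decision(res.x))                # estimated treatment rule on the sample
```

In the paper's multi-stage setting, this constrained problem would be solved backward from the last stage, with stage-specific weights and a separate risk budget at each stage.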
Pages: 64
Related Papers
50 records in total
  • [1] Controlling Cumulative Adverse Risk in Learning Optimal Dynamic Treatment Regimens
    Liu, Mochuan
    Wang, Yuanjia
    Fu, Haoda
    Zeng, Donglin
    JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION, 2024, 119 (548) : 2622 - 2633
  • [2] Augmented outcome-weighted learning for estimating optimal dynamic treatment regimens
    Liu, Ying
    Wang, Yuanjia
    Kosorok, Michael R.
    Zhao, Yingqi
    Zeng, Donglin
    STATISTICS IN MEDICINE, 2018, 37 (26) : 3776 - 3788
  • [3] New Statistical Learning Methods for Estimating Optimal Dynamic Treatment Regimes
    Zhao, Ying-Qi
    Zeng, Donglin
    Laber, Eric B.
    Kosorok, Michael R.
    JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION, 2015, 110 (510) : 583 - 598
  • [4] PENALIZED Q-LEARNING FOR DYNAMIC TREATMENT REGIMENS
    Song, Rui
    Wang, Weiwei
    Zeng, Donglin
    Kosorok, Michael R.
    STATISTICA SINICA, 2015, 25 (03) : 901 - 920
  • [5] C-learning: A new classification framework to estimate optimal dynamic treatment regimes
    Zhang, Baqun
    Zhang, Min
    BIOMETRICS, 2018, 74 (03) : 891 - 899
  • [6] Learning optimal dynamic treatment regimes from longitudinal data
    Williams, Nicholas T.
    Hoffman, Katherine L.
    Diaz, Ivan
    Rudolph, Kara E.
    AMERICAN JOURNAL OF EPIDEMIOLOGY, 2024, 193 (12) : 1768 - 1775
  • [7] Q- and A-Learning Methods for Estimating Optimal Dynamic Treatment Regimes
    Schulte, Phillip J.
    Tsiatis, Anastasios A.
    Laber, Eric B.
    Davidian, Marie
    STATISTICAL SCIENCE, 2014, 29 (04) : 640 - 661
  • [8] Differentially private outcome-weighted learning for optimal dynamic treatment regime estimation
    Spicker, Dylan
    Moodie, Erica E. M.
    Shortreed, Susan M.
    STAT, 2024, 13 (01)
  • [9] Learning and Assessing Optimal Dynamic Treatment Regimes Through Cooperative Imitation Learning
    Shah, Syed Ihtesham Hussain
    Coronato, Antonio
    Naeem, Muddasar
    De Pietro, Giuseppe
    IEEE ACCESS, 2022, 10 : 78148 - 78158
  • [10] Near-Optimal Reinforcement Learning in Dynamic Treatment Regimes
    Zhang, Junzhe
    Bareinboim, Elias
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32