New Statistical Learning Methods for Estimating Optimal Dynamic Treatment Regimes.

Author Information

Zhao Ying-Qi, Zeng Donglin, Laber Eric B, Kosorok Michael R

Affiliations

Assistant Professor, Department of Biostatistics and Medical Informatics, University of Wisconsin-Madison, WI 53792.

Professor, Department of Biostatistics, University of North Carolina at Chapel Hill, NC 27599.

Publication Information

J Am Stat Assoc. 2015;110(510):583-598. doi: 10.1080/01621459.2014.937488.

Abstract

Dynamic treatment regimes (DTRs) are sequential decision rules for individual patients that can adapt over time to an evolving illness. The goal is to accommodate heterogeneity among patients and find the DTR which will produce the best long-term outcome if implemented. We introduce two new statistical learning methods for estimating the optimal DTR, termed backward outcome weighted learning (BOWL) and simultaneous outcome weighted learning (SOWL). These approaches convert individualized treatment selection into either a sequential or a simultaneous classification problem, and can thus be applied by modifying existing machine learning techniques. The proposed methods are based on directly maximizing over all DTRs a nonparametric estimator of the expected long-term outcome; this is fundamentally different from regression-based methods, for example Q-learning, which indirectly attempt such maximization and rely heavily on the correctness of postulated regression models. We prove that the resulting rules are consistent, and provide finite sample bounds for the errors using the estimated rules. Simulation results suggest the proposed methods produce superior DTRs compared with Q-learning, especially in small samples. We illustrate the methods using data from a clinical trial for smoking cessation.
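To make the outcome-weighted classification idea concrete, the sketch below illustrates single-stage outcome weighted learning, the building block that BOWL applies backward one stage at a time (following the cited Zhao et al. 2012 formulation): classifying the observed treatment with weights proportional to the outcome divided by the treatment-assignment probability, with hinge loss standing in for the 0-1 loss. The simulated data, variable names, and the linear-SVM choice are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of single-stage outcome weighted learning (OWL), the
# weighted-classification building block behind BOWL/SOWL. Simulated
# data; a linear SVM is an illustrative choice of classifier.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 2))          # patient covariates
A = rng.choice([-1, 1], size=n)      # randomized treatment, coded -1/+1
# Outcome is larger when the treatment sign agrees with the first covariate.
R = 1.0 + A * X[:, 0] + rng.normal(scale=0.5, size=n)

pi = 0.5                             # known randomization probability
# Shift outcomes to be nonnegative; with known, constant propensities this
# shift does not change which rule maximizes the estimated value.
w = (R - R.min()) / pi

# Weighted classification of the observed treatment: maximizing the
# inverse-probability-weighted value estimator corresponds to minimizing a
# weighted 0-1 loss, here replaced by its hinge-loss surrogate.
clf = SVC(kernel="linear", C=1.0)
clf.fit(X, A, sample_weight=w)

# The estimated rule assigns each new patient the predicted treatment label.
x_new = np.array([[1.2, -0.3], [-0.8, 0.5]])
print(clf.predict(x_new))            # e.g., [ 1 -1 ]
```

Roughly, BOWL repeats this weighted classification from the final stage backward, at each earlier stage using only trajectories whose subsequent treatments agree with the already-estimated later-stage rules, while SOWL estimates all stage-specific rules simultaneously.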

Similar Articles

1. New Statistical Learning Methods for Estimating Optimal Dynamic Treatment Regimes.
   J Am Stat Assoc. 2015;110(510):583-598. doi: 10.1080/01621459.2014.937488.
2. Adaptive contrast weighted learning for multi-stage multi-treatment decision-making.
   Biometrics. 2017 Mar;73(1):145-155. doi: 10.1111/biom.12539. Epub 2016 May 23.
3. Tree-Based Reinforcement Learning for Estimating Optimal Dynamic Treatment Regimes.
   Ann Appl Stat. 2018 Sep;12(3):1914-1938. doi: 10.1214/18-AOAS1137. Epub 2018 Sep 11.
4. Estimating Individualized Treatment Rules Using Outcome Weighted Learning.
   J Am Stat Assoc. 2012 Sep 1;107(449):1106-1118. doi: 10.1080/01621459.2012.695674.
5. Augmented outcome-weighted learning for estimating optimal dynamic treatment regimens.
   Stat Med. 2018 Nov 20;37(26):3776-3788. doi: 10.1002/sim.7844. Epub 2018 Jun 5.
6. Q-learning for estimating optimal dynamic treatment rules from observational data.
   Can J Stat. 2012 Dec 1;40(4):629-645. doi: 10.1002/cjs.11162. Epub 2012 Nov 7.
7. Dynamic Treatment Regimes Using Bayesian Additive Regression Trees for Censored Outcomes.
   Lifetime Data Anal. 2024 Jan;30(1):181-212. doi: 10.1007/s10985-023-09605-8. Epub 2023 Sep 2.
8. Bayesian inference for optimal dynamic treatment regimes in practice.
   Int J Biostat. 2023 May 17;19(2):309-331. doi: 10.1515/ijb-2022-0073. eCollection 2023 Nov 1.

Cited By

1. Controlling Cumulative Adverse Risk in Learning Optimal Dynamic Treatment Regimens.
   J Am Stat Assoc. 2024;119(548):2622-2633. doi: 10.1080/01621459.2023.2270637. Epub 2023 Dec 11.
2. Simultaneous Feature Selection for Optimal Dynamic Treatment Regimens.
   Stat Med. 2025 Jul;44(15-17):e70169. doi: 10.1002/sim.70169.
3. Gerontologic Biostatistics and Data Science: Aging Research in the Era of Big Data.
   J Gerontol A Biol Sci Med Sci. 2024 Dec 11;80(1). doi: 10.1093/gerona/glae269.
4. Fusing Individualized Treatment Rules Using Secondary Outcomes.
   Proc Mach Learn Res. 2024 May;238:712-720.
5. A Bayesian multivariate hierarchical model for developing a treatment benefit index using mixed types of outcomes.
   BMC Med Res Methodol. 2024 Sep 27;24(1):218. doi: 10.1186/s12874-024-02333-z.
6. Learning optimal dynamic treatment regimes from longitudinal data.
   Am J Epidemiol. 2024 Dec 2;193(12):1768-1775. doi: 10.1093/aje/kwae122.
7. Machine Learning and Health Science Research: Tutorial.
   J Med Internet Res. 2024 Jan 30;26:e50890. doi: 10.2196/50890.

References

1. Reinforced Angle-based Multicategory Support Vector Machines.
   J Comput Graph Stat. 2016;25(3):806-825. doi: 10.1080/10618600.2015.1043010. Epub 2016 Aug 5.
2. Q- and A-learning Methods for Estimating Optimal Dynamic Treatment Regimes.
   Stat Sci. 2014 Nov;29(4):640-661. doi: 10.1214/13-STS450.
3. Estimating Optimal Treatment Regimes from a Classification Perspective.
   Stat. 2012 Jan 1;1(1):103-114. doi: 10.1002/sta.411.
4. Estimating Individualized Treatment Rules Using Outcome Weighted Learning.
   J Am Stat Assoc. 2012 Sep 1;107(449):1106-1118. doi: 10.1080/01621459.2012.695674.
5. Q-learning: a data analysis method for constructing adaptive interventions.
   Psychol Methods. 2012 Dec;17(4):478-94. doi: 10.1037/a0029373. Epub 2012 Oct 1.
6. Q-Learning with Censored Data.
   Ann Stat. 2012 Feb 1;40(1):529-560. doi: 10.1214/12-AOS968.
7. A robust method for estimating optimal treatment regimes.
   Biometrics. 2012 Dec;68(4):1010-8. doi: 10.1111/j.1541-0420.2012.01763.x. Epub 2012 May 2.
8. Informing sequential clinical decision-making through reinforcement learning: an empirical study.
   Mach Learn. 2011 Jul 1;84(1-2):109-136. doi: 10.1007/s10994-010-5229-0.
