Suppr 超能文献

Similar Articles

1. Markov decision processes: a tool for sequential decision making under uncertainty. Med Decis Making. 2010 Jul-Aug;30(4):474-83. doi: 10.1177/0272989X09353194. Epub 2009 Dec 31.
2. Sensitivity Analysis in Sequential Decision Models. Med Decis Making. 2017 Feb;37(2):243-252. doi: 10.1177/0272989X16670605. Epub 2016 Sep 29.
3. Markov models for clinical decision-making in radiation oncology: A systematic review. J Med Imaging Radiat Oncol. 2024 Aug;68(5):610-623. doi: 10.1111/1754-9485.13656. Epub 2024 May 20.
4. A Promising Approach to Optimizing Sequential Treatment Decisions for Depression: Markov Decision Process. Pharmacoeconomics. 2022 Nov;40(11):1015-1032. doi: 10.1007/s40273-022-01185-z. Epub 2022 Sep 14.
5. Probabilistic sensitivity analysis on Markov models with uncertain transition probabilities: an application in evaluating treatment decisions for type 2 diabetes. Health Care Manag Sci. 2019 Mar;22(1):34-52. doi: 10.1007/s10729-017-9420-8. Epub 2017 Oct 27.
6. Relativized hierarchical decomposition of Markov decision processes. Prog Brain Res. 2013;202:465-88. doi: 10.1016/B978-0-444-62604-2.00023-X.
7. Planning treatment of ischemic heart disease with partially observable Markov decision processes. Artif Intell Med. 2000 Mar;18(3):221-44. doi: 10.1016/s0933-3657(99)00042-1.
8. Optimization of anemia treatment in hemodialysis patients via reinforcement learning. Artif Intell Med. 2014 Sep;62(1):47-60. doi: 10.1016/j.artmed.2014.07.004. Epub 2014 Jul 19.
9. Quantile Markov Decision Processes. Oper Res. 2022 May-Jun;70(3):1428-1447. doi: 10.1287/opre.2021.2123. Epub 2021 Nov 9.
10. Partially observable Markov decision processes and performance sensitivity analysis. IEEE Trans Syst Man Cybern B Cybern. 2008 Dec;38(6):1645-51. doi: 10.1109/TSMCB.2008.927711.

Cited By

1. Optimizing Vital Signs in Patients With Traumatic Brain Injury: Reinforcement Learning Algorithm Development and Validation. J Med Internet Res. 2025 Jul 3;27:e63847. doi: 10.2196/63847.
2. Cost-effectiveness of personalized policies for implementing organ-at-risk sparing adaptive radiation therapy in head and neck cancer. Phys Imaging Radiat Oncol. 2025 May 6;34:100772. doi: 10.1016/j.phro.2025.100772. eCollection 2025 Apr.
3. Quickest way to less headache days: an operational research model and its implementation for chronic migraine. BMC Neurol. 2025 Mar 31;25(1):132. doi: 10.1186/s12883-025-04124-5.
4. Optimal timing of organs-at-risk-sparing adaptive radiation therapy for head-and-neck cancer under re-planning resource constraints. Phys Imaging Radiat Oncol. 2025 Jan 27;33:100715. doi: 10.1016/j.phro.2025.100715. eCollection 2025 Jan.
5. Optimal Timing of Organs-at-Risk-Sparing Adaptive Radiation Therapy for Head-and-Neck Cancer under Re-planning Resource Constraints. medRxiv. 2024 Nov 4:2024.04.01.24305163. doi: 10.1101/2024.04.01.24305163.
6. PrescDRL: deep reinforcement learning for herbal prescription planning in treatment of chronic diseases. Chin Med. 2024 Oct 16;19(1):144. doi: 10.1186/s13020-024-01005-w.
7. Markov models for clinical decision-making in radiation oncology: A systematic review. J Med Imaging Radiat Oncol. 2024 Aug;68(5):610-623. doi: 10.1111/1754-9485.13656. Epub 2024 May 20.
8. An Application of Inverse Reinforcement Learning to Estimate Interference in Drone Swarms. Entropy (Basel). 2022 Sep 27;24(10):1364. doi: 10.3390/e24101364.
9. A Promising Approach to Optimizing Sequential Treatment Decisions for Depression: Markov Decision Process. Pharmacoeconomics. 2022 Nov;40(11):1015-1032. doi: 10.1007/s40273-022-01185-z. Epub 2022 Sep 14.
10. Autonomous Rear Parking via Rapidly Exploring Random-Tree-Based Reinforcement Learning. Sensors (Basel). 2022 Sep 2;22(17):6655. doi: 10.3390/s22176655.

References

1. Motion Planning Under Uncertainty for Image-guided Medical Needle Steering. Int J Rob Res. 2008;27(11-12):1361-1374. doi: 10.1177/0278364908097661.
2. Optimizing the start time of statin therapy for patients with diabetes. Med Decis Making. 2009 May-Jun;29(3):351-67. doi: 10.1177/0272989X08329462. Epub 2009 May 8.
3. Incorporating biological natural history in simulation models: empirical estimates of the progression of end-stage liver disease. Med Decis Making. 2005 Nov-Dec;25(6):620-32. doi: 10.1177/0272989X05282719.
4. Survival after liver transplantation in the United States: a disease-specific analysis of the UNOS database. Liver Transpl. 2004 Jul;10(7):886-97. doi: 10.1002/lt.20137.
5. Planning treatment of ischemic heart disease with partially observable Markov decision processes. Artif Intell Med. 2000 Mar;18(3):221-44. doi: 10.1016/s0933-3657(99)00042-1.
6. Optimal control of a birth and death epidemic process. Oper Res. 1981 Sep-Oct;29(5):971-82. doi: 10.1287/opre.29.5.971.
7. Primer on medical decision analysis: Part 1--Getting started. Med Decis Making. 1997 Apr-Jun;17(2):123-5. doi: 10.1177/0272989X9701700201.
8. The Markov process in medical prognosis. Med Decis Making. 1983;3(4):419-458. doi: 10.1177/0272989X8300300403.

Markov decision processes: a tool for sequential decision making under uncertainty.

Affiliation

Department of Industrial and Systems Engineering, University of Wisconsin-Madison, Madison, WI 53706, USA.

Publication Info

Med Decis Making. 2010 Jul-Aug;30(4):474-83. doi: 10.1177/0272989X09353194. Epub 2009 Dec 31.

DOI: 10.1177/0272989X09353194
PMID: 20044582
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC3060044/
Abstract

We provide a tutorial on the construction and evaluation of Markov decision processes (MDPs), which are powerful analytical tools used for sequential decision making under uncertainty that have been widely used in many industrial and manufacturing applications but are underutilized in medical decision making (MDM). We demonstrate the use of an MDP to solve a sequential clinical treatment problem under uncertainty. Markov decision processes generalize standard Markov models in that a decision process is embedded in the model and multiple decisions are made over time. Furthermore, they have significant advantages over standard decision analysis. We compare MDPs to standard Markov-based simulation models by solving the problem of the optimal timing of living-donor liver transplantation using both methods. Both models result in the same optimal transplantation policy and the same total life expectancies for the same patient and living donor. The computation time for solving the MDP model is significantly smaller than that for solving the Markov model. We briefly describe the growing literature of MDPs applied to medical decisions.
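The liver-transplant problem the abstract describes is an optimal-stopping MDP: in each health state the decision maker either waits one more period or transplants and ends the process, and value iteration finds the state-dependent policy. The sketch below illustrates that structure only; the states, rewards, and transition probabilities are invented for illustration and are not taken from the paper.

```python
# Toy optimal-stopping MDP ("wait" vs. "transplant") solved by value
# iteration. All numbers below are hypothetical, not from the article.

# Patient health states, ordered from best to worst.
STATES = ["mild", "moderate", "severe"]

# Hypothetical one-step transition probabilities under the "wait" action.
P_WAIT = {
    "mild":     {"mild": 0.80, "moderate": 0.15, "severe": 0.05},
    "moderate": {"mild": 0.05, "moderate": 0.75, "severe": 0.20},
    "severe":   {"mild": 0.00, "moderate": 0.10, "severe": 0.90},
}

# Hypothetical rewards: per-period reward while waiting, and a terminal
# payoff (e.g., post-transplant expected quality-adjusted life) for
# transplanting in each state.
R_WAIT = {"mild": 1.0, "moderate": 0.5, "severe": 0.2}
R_TRANSPLANT = {"mild": 15.0, "moderate": 16.0, "severe": 14.0}

GAMMA = 0.97  # discount factor


def value_iteration(tol=1e-9):
    """Return the optimal value function and policy for the stopping MDP."""
    v = {s: 0.0 for s in STATES}
    while True:
        v_new, policy = {}, {}
        for s in STATES:
            # Expected discounted value of waiting one more period.
            q_wait = R_WAIT[s] + GAMMA * sum(
                P_WAIT[s][t] * v[t] for t in STATES
            )
            # Transplanting is terminal: collect the payoff, process ends.
            q_stop = R_TRANSPLANT[s]
            v_new[s], policy[s] = max(
                (q_wait, "wait"), (q_stop, "transplant"), key=lambda x: x[0]
            )
        if max(abs(v_new[s] - v[s]) for s in STATES) < tol:
            return v_new, policy
        v = v_new


values, policy = value_iteration()
for s in STATES:
    print(f"{s:9s}  V*={values[s]:6.2f}  action={policy[s]}")
```

With these made-up numbers the optimal policy is state-dependent, which is the point of the MDP formulation: wait while mild, transplant once moderate or severe. A standard Markov cohort model would have to simulate each candidate policy separately, whereas value iteration recovers the optimal one directly.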
