
Learning optimal treatment strategies for intraoperative hypotension using deep reinforcement learning.

Author Information

Adiyeke Esra, Liu Tianqi, Naganaboina Venkata Sai Dheeraj, Li Han, Loftus Tyler J, Ren Yuanfang, Shickel Benjamin, Ruppert Matthew M, Singh Karandeep, Fang Ruogu, Rashidi Parisa, Bihorac Azra, Ozrazgat-Baslanti Tezcan

Affiliations

Intelligent Clinical Care Center (IC3), University of Florida, Gainesville, FL.

Department of Medicine, Division of Nephrology, Hypertension, and Renal Transplantation, University of Florida, Gainesville, FL.

Publication Information

ArXiv. 2025 May 27:arXiv:2505.21596v1.

Abstract

IMPORTANCE

Traditional methods of surgical decision-making rely heavily on human experience and prompt action, both of which vary among clinicians. A data-driven system that generates treatment recommendations based on patient states could be a substantial asset in perioperative decision-making, as in intraoperative hypotension, for which suboptimal management is associated with acute kidney injury (AKI), a common and morbid postoperative complication.

OBJECTIVE

To develop a reinforcement learning (RL) model that recommends optimal doses of intravenous (IV) fluids and vasopressors during surgery to avoid intraoperative hypotension and postoperative AKI.

DESIGN, SETTING, AND PARTICIPANTS

We retrospectively analyzed 50,021 surgeries from 42,547 adult patients who underwent major surgery at a quaternary care hospital between June 2014 and September 2020. Of these, 34,186 surgeries were used for model training and internal validation, while 15,835 surgeries were reserved for testing. We developed an RL model based on Deep Q-Networks (DQN) to provide optimal treatment suggestions.
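The abstract does not describe the DQN architecture or action space; as an illustration only, the sketch below assumes patient-state features are mapped by a small Q-network to a discretized grid of (IV fluid, vasopressor) dose pairs, with the greedy action taken as the recommendation. All dimensions and bin counts here are hypothetical, and the weights are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discretization: 5 IV-fluid bins x 5 vasopressor bins = 25 actions.
N_FLUID_BINS, N_VASO_BINS = 5, 5
N_ACTIONS = N_FLUID_BINS * N_VASO_BINS
STATE_DIM = 16  # illustrative number of patient-state features

# Tiny two-layer Q-network; in the actual study the weights would be
# learned by deep Q-learning from logged surgical trajectories.
W1 = rng.normal(scale=0.1, size=(STATE_DIM, 32))
b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(32, N_ACTIONS))
b2 = np.zeros(N_ACTIONS)

def q_values(state: np.ndarray) -> np.ndarray:
    """Forward pass: state features -> one Q-value per (fluid, vasopressor) pair."""
    h = np.maximum(0.0, state @ W1 + b1)  # ReLU hidden layer
    return h @ W2 + b2

def greedy_action(state: np.ndarray) -> tuple:
    """Recommend the dose pair with the highest Q-value."""
    a = int(np.argmax(q_values(state)))
    return divmod(a, N_VASO_BINS)  # (fluid_bin, vaso_bin)

state = rng.normal(size=STATE_DIM)
fluid_bin, vaso_bin = greedy_action(state)
```

Flattening the dose grid into a single discrete action set is one common way to let a standard DQN handle a two-drug recommendation; a factored or continuous-action formulation would be an alternative.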

EXPOSURES

Demographic and baseline clinical characteristics, intraoperative physiologic time series, and the total doses of IV fluids and vasopressors were extracted every 15 minutes during surgery.

MAIN OUTCOMES

In the RL model, intraoperative hypotension (mean arterial pressure [MAP] < 65 mmHg) and AKI within the first three postoperative days were considered as outcomes.
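The abstract names the two outcomes but not how they enter the learning signal; one plausible reading is a per-step reward that penalizes each hypotensive 15-minute interval plus a terminal penalty for postoperative AKI. The penalty magnitudes below are invented for illustration.

```python
def step_reward(map_mmhg: float, aki_within_3d: bool, terminal: bool) -> float:
    """Illustrative reward for one 15-minute step.

    Penalizes intraoperative hypotension (MAP < 65 mmHg) at every step and,
    at the final step of the surgery, adds a larger penalty if AKI occurred
    within the first three postoperative days. The weights (-1 and -10) are
    assumptions, not values reported by the study.
    """
    r = -1.0 if map_mmhg < 65.0 else 0.0
    if terminal and aki_within_3d:
        r -= 10.0
    return r
```

Weighting the terminal AKI penalty more heavily than a single hypotensive interval reflects that AKI is the morbid downstream outcome the policy is meant to avoid.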

RESULTS

The developed model replicated 69% of physicians' decisions for vasopressor dosage and proposed a higher or lower vasopressor dose than was actually administered in 10% and 21% of treatments, respectively. For IV fluids, the model's recommendations were within 0.05 mL/kg/15 min of the actual dose in 41% of cases, with higher or lower doses recommended in 27% and 32% of treatments, respectively. The RL policy achieved a higher estimated policy value than the physicians' actual treatments, as well as random and zero-drug policies. The prevalence of AKI was lowest among patients who received medication dosages aligned with the agent's decisions.

CONCLUSIONS AND RELEVANCE

Our findings suggest that implementation of the model's policy has the potential to reduce postoperative AKI and improve other outcomes driven by intraoperative hypotension.


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6a81/12148086/29eb9141259b/nihpp-2505.21596v1-f0001.jpg
