
Using Artificial Intelligence to Learn Optimal Regimen Plan for Alzheimer's Disease.

Author Information

Bhattarai Kritib, Das Trisha, Kim Yejin, Chen Yongbin, Dai Qiying, Li Xiaoyang, Jiang Xiaoqian, Zong Nansu

Affiliations

Department of Computer Science, Luther College, Decorah, IA, United States.

Department of Computer Science, University of Illinois Urbana-Champaign, Champaign, IL, United States.

Publication Information

medRxiv. 2023 Jan 29:2023.01.26.23285064. doi: 10.1101/2023.01.26.23285064.

Abstract

BACKGROUND

Alzheimer's Disease (AD) is a progressive neurological disorder with no specific curative medications. While only a few medications are approved by the FDA (i.e., donepezil, galantamine, rivastigmine, and memantine) to relieve symptoms (e.g., cognitive decline), sophisticated clinical skills are crucial to optimize the appropriate regimens, given the multiple coexisting comorbidities in this patient population.

OBJECTIVE

Here, we propose a study to leverage reinforcement learning (RL) to learn the clinicians' decisions for AD patients based on the longitudinal records from Electronic Health Records (EHRs).

METHODS

In this study, we extracted 1,736 patients fulfilling our criteria from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. We focused on the two most frequent concomitant diseases, depression and hypertension, resulting in five main cohorts: 1) whole data, 2) AD-only, 3) AD-hypertension, 4) AD-depression, and 5) AD-hypertension-depression. We modeled treatment learning as an RL problem by defining the three RL factors (i.e., states, actions, and rewards) under multiple strategies: a regression model and a decision tree were developed to generate states, six main medication categories extracted from the records (i.e., no drugs, cholinesterase inhibitors, memantine, hypertension drugs, a combination of cholinesterase inhibitors and memantine, and supplements or other drugs) served as actions, and Mini-Mental State Exam (MMSE) scores served as rewards.
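
As an illustration of the formulation above, the sketch below (Python, hypothetical and not the authors' code) shows one way to derive discrete states from a shallow decision tree fit on visit-level features, enumerate the six medication categories as actions, and compute an MMSE-based reward. The feature inputs, the number of states, and the use of MMSE change rather than the raw score are assumptions for illustration only.

```python
# Hypothetical sketch of the MDP formulation described above (not the
# authors' code). States come from the leaves of a shallow decision tree,
# actions are the six medication categories, and the reward is MMSE-based.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# The six action categories named in the abstract.
ACTIONS = [
    "no_drugs",
    "cholinesterase_inhibitors",
    "memantine",
    "hypertension_drugs",
    "cholinesterase_inhibitors_plus_memantine",
    "supplements_or_other",
]

def build_states(visit_features: np.ndarray, next_mmse: np.ndarray, n_states: int = 8):
    """Fit a shallow regression tree on visit-level features and use its
    leaf indices as discrete RL states (one of the state-construction
    strategies mentioned in the abstract; n_states is an assumption)."""
    tree = DecisionTreeRegressor(max_leaf_nodes=n_states, random_state=0)
    tree.fit(visit_features, next_mmse)
    leaves = tree.apply(visit_features)                  # leaf id per visit
    _, states = np.unique(leaves, return_inverse=True)   # re-index to 0..n-1
    return tree, states

def reward(mmse_now: float, mmse_next: float) -> float:
    """MMSE-based reward; the change between consecutive visits is used here
    as an illustrative choice (higher MMSE = better cognition)."""
    return float(mmse_next - mmse_now)
```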

RESULTS

Given the proper dataset, the RL model can generate an optimal policy (regimen plan) that outperforms the clinician's treatment regimen. With the smallest data samples, the optimal policies (i.e., from policy iteration and Q-learning) gained a lower reward than the clinician's policy (mean -2.68 and -2.76 vs. -2.66, respectively), but they gained a higher reward once the data size increased (mean -3.56 and -2.48 vs. -3.57, respectively).
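
For concreteness, the sketch below shows a minimal tabular Q-learning pass over logged (state, action, reward, next-state) transitions and a crude mean-reward comparison against the logged clinician behavior. The hyperparameters and the agreement-based evaluation are illustrative assumptions, not the paper's evaluation protocol.

```python
# Hypothetical sketch of tabular Q-learning over logged transitions, one of
# the two planners compared above; hyperparameters are illustrative only.
import numpy as np

def q_learning(transitions, n_states, n_actions, alpha=0.1, gamma=0.9, epochs=200):
    """transitions: list of (state, action, reward, next_state) tuples
    built from consecutive patient visits."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(epochs):
        for s, a, r, s_next in transitions:
            td_target = r + gamma * Q[s_next].max()   # bootstrap from best next action
            Q[s, a] += alpha * (td_target - Q[s, a])  # temporal-difference update
    return Q.argmax(axis=1)                           # greedy policy: best action per state

def mean_reward(transitions, policy=None):
    """Average logged reward; if a learned policy is given, restrict to visits
    where the clinician's logged action agrees with it (a crude offline check,
    not necessarily the paper's evaluation protocol)."""
    rewards = [r for s, a, r, _ in transitions if policy is None or policy[s] == a]
    return float(np.mean(rewards)) if rewards else float("nan")
```

For example, `learned = q_learning(transitions, n_states=8, n_actions=6)` followed by comparing `mean_reward(transitions, learned)` with `mean_reward(transitions)` mirrors, in spirit, the learned-policy-versus-clinician comparison reported above.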

CONCLUSIONS

Our results highlight the potential of using RL to generate optimal treatment based on patients' longitudinal records. Our work can lead the path toward the development of RL-based decision support systems that could facilitate daily practice in managing Alzheimer's disease with comorbidities.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/902d/9901063/eef6e811862e/nihpp-2023.01.26.23285064v1-f0001.jpg
