Interpretable Deep Models for ICU Outcome Prediction.

Author Information

Che Zhengping, Purushotham Sanjay, Khemani Robinder, Liu Yan

Affiliations

University of Southern California, Los Angeles, CA, USA.

Children's Hospital Los Angeles, Los Angeles, CA, USA.

Publication Information

AMIA Annu Symp Proc. 2017 Feb 10;2016:371-380. eCollection 2016.

Abstract

The exponential surge in health care data, such as longitudinal data from electronic health records (EHR) and sensor data from the intensive care unit (ICU), is providing new opportunities to discover meaningful data-driven characteristics and patterns of diseases. Recently, deep learning models have been employed for many computational phenotyping and healthcare prediction tasks to achieve state-of-the-art performance. However, deep models lack interpretability, which is crucial for wide adoption in medical research and clinical decision-making. In this paper, we introduce a simple yet powerful knowledge-distillation approach called interpretable mimic learning, which uses gradient boosting trees to learn interpretable models while achieving prediction performance comparable to deep learning models. Experimental results on a pediatric ICU dataset for acute lung injury (ALI) show that our proposed method not only outperforms state-of-the-art approaches on mortality and ventilator-free-days prediction tasks but can also provide interpretable models to clinicians.

Similar Articles

1
Interpretable Deep Models for ICU Outcome Prediction.
AMIA Annu Symp Proc. 2017 Feb 10;2016:371-380. eCollection 2016.
2
ISeeU: Visually interpretable deep learning for mortality prediction inside the ICU.
J Biomed Inform. 2019 Oct;98:103269. doi: 10.1016/j.jbi.2019.103269. Epub 2019 Aug 17.
3
Mortality prediction in intensive care units (ICUs) using a deep rule-based fuzzy classifier.
J Biomed Inform. 2018 Mar;79:48-59. doi: 10.1016/j.jbi.2018.02.008. Epub 2018 Feb 19.
5
Interpretable clinical prediction via attention-based neural network.
BMC Med Inform Decis Mak. 2020 Jul 9;20(Suppl 3):131. doi: 10.1186/s12911-020-1110-7.
6
An explainable knowledge distillation method with XGBoost for ICU mortality prediction.
Comput Biol Med. 2023 Jan;152:106466. doi: 10.1016/j.compbiomed.2022.106466. Epub 2022 Dec 21.
8
Early hospital mortality prediction of intensive care unit patients using an ensemble learning approach.
Int J Med Inform. 2017 Dec;108:185-195. doi: 10.1016/j.ijmedinf.2017.10.002. Epub 2017 Oct 5.
9
An integrated LSTM-HeteroRGNN model for interpretable opioid overdose risk prediction.
Artif Intell Med. 2023 Jan;135:102439. doi: 10.1016/j.artmed.2022.102439. Epub 2022 Nov 3.
10
Benchmarking deep learning models on large healthcare datasets.
J Biomed Inform. 2018 Jul;83:112-134. doi: 10.1016/j.jbi.2018.04.007. Epub 2018 Jun 5.

Cited By

3
The application of explainable artificial intelligence (XAI) in electronic health record research: A scoping review.
Digit Health. 2024 Oct 30;10:20552076241272657. doi: 10.1177/20552076241272657. eCollection 2024 Jan-Dec.
4
Explainable federated learning scheme for secure healthcare data sharing.
Health Inf Sci Syst. 2024 Sep 13;12(1):49. doi: 10.1007/s13755-024-00306-6. eCollection 2024 Dec.
6
Prototype Learning for Medical Time Series Classification via Human-Machine Collaboration.
Sensors (Basel). 2024 Apr 22;24(8):2655. doi: 10.3390/s24082655.
7
Towards Artificial Intelligence Applications in Next Generation Cytopathology.
Biomedicines. 2023 Aug 8;11(8):2225. doi: 10.3390/biomedicines11082225.
8
Status Forecasting Based on the Baseline Information Using Logistic Regression.
Entropy (Basel). 2022 Oct 17;24(10):1481. doi: 10.3390/e24101481.
10
Prognosis of COVID-19 patients using lab tests: A data mining approach.
Health Sci Rep. 2023 Jan 8;6(1):e1049. doi: 10.1002/hsr2.1049. eCollection 2023 Jan.

References

2
Causal Phenotype Discovery via Deep Networks.
AMIA Annu Symp Proc. 2015 Nov 5;2015:677-86. eCollection 2015.
3
Computational phenotype discovery using unsupervised feature learning over noisy, sparse, and irregular clinical data.
PLoS One. 2013 Jun 24;8(6):e66341. doi: 10.1371/journal.pone.0066341. Print 2013.
4
Bringing big data to personalized healthcare: a patient-centered framework.
J Gen Intern Med. 2013 Sep;28 Suppl 3(Suppl 3):S660-5. doi: 10.1007/s11606-013-2455-8.
5
Computational phenotyping of two-person interactions reveals differential neural response to depth-of-thought.
PLoS Comput Biol. 2012;8(12):e1002841. doi: 10.1371/journal.pcbi.1002841. Epub 2012 Dec 27.
6
Next-generation phenotyping of electronic health records.
J Am Med Inform Assoc. 2013 Jan 1;20(1):117-21. doi: 10.1136/amiajnl-2012-001145. Epub 2012 Sep 6.
8
Effect of tidal volume in children with acute hypoxemic respiratory failure.
Intensive Care Med. 2009 Aug;35(8):1428-37. doi: 10.1007/s00134-009-1527-z. Epub 2009 Jun 17.
9
Comparing computer-interpretable guideline models: a case-study approach.
J Am Med Inform Assoc. 2003 Jan-Feb;10(1):52-68. doi: 10.1197/jamia.m1135.