
Demystifying the Black Box: The Importance of Interpretability of Predictive Models in Neurocritical Care.

Affiliations

Department of Clinical Physics & Bioengineering, NHS Greater Glasgow and Clyde, Room 2.41, Level 2, New Lister Building, Glasgow Royal Infirmary, 10-16 Alexandra Parade, Glasgow, G31 2ER, UK.

School of Medicine, Dentistry and Nursing, University of Glasgow, Glasgow, UK.

Publication Information

Neurocrit Care. 2022 Aug;37(Suppl 2):185-191. doi: 10.1007/s12028-022-01504-4. Epub 2022 May 6.

Abstract

Neurocritical care patients are a complex patient population, and to aid clinical decision-making, many models and scoring systems have previously been developed. More recently, techniques from the field of machine learning have been applied to neurocritical care patient data to develop models with high levels of predictive accuracy. However, although these recent models appear clinically promising, their interpretability has often not been considered and they tend to be black box models, making it extremely difficult to understand how the model came to its conclusion. Interpretable machine learning methods have the potential to provide the means to overcome some of these issues but are largely unexplored within the neurocritical care domain. This article examines existing models used in neurocritical care from the perspective of interpretability. Further, the use of interpretable machine learning will be explored, in particular the potential benefits and drawbacks that the techniques may have when applied to neurocritical care data. Finding a solution to the lack of model explanation, transparency, and accountability is important because these issues have the potential to contribute to model trust and clinical acceptance, and, increasingly, regulation is stipulating a right to explanation for decisions made by models and algorithms. To ensure that the prospective gains from sophisticated predictive models to neurocritical care provision can be realized, it is imperative that interpretability of these models is fully considered.
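
To make the contrast between black-box and intrinsically interpretable approaches concrete, the sketch below is a minimal, purely illustrative Python example that is not drawn from the article: it fits a logistic regression, whose coefficients can be read off directly, alongside a random forest that is explained post hoc with scikit-learn's permutation importance. The feature names and the synthetic dataset are hypothetical stand-ins for neurocritical care variables.

```python
# Illustrative sketch only (not from the article): an intrinsically
# interpretable model versus a black-box model explained post hoc.
# Feature names and data are hypothetical stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

feature_names = ["gcs_score", "age", "pupil_reactivity",
                 "mean_arterial_pressure", "icp"]

# Synthetic stand-in for a tabular outcome-prediction dataset.
X, y = make_classification(n_samples=500, n_features=5,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Intrinsically interpretable: each coefficient maps directly to a feature.
logit = LogisticRegression(max_iter=1000).fit(X_train, y_train)
for name, coef in zip(feature_names, logit.coef_[0]):
    print(f"{name}: coefficient = {coef:+.3f}")

# Black-box model, explained after the fact with permutation importance.
forest = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(forest, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, imp in zip(feature_names, result.importances_mean):
    print(f"{name}: permutation importance = {imp:.3f}")
```

A real application would use curated patient records and clinically validated explanations; the sketch is only meant to show the difference between a model that is interpretable by construction and one that requires a separate explanation step.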

Similar Articles

Opening the black box of AI-Medicine.
J Gastroenterol Hepatol. 2021 Mar;36(3):581-584. doi: 10.1111/jgh.15384.

Big Data/AI in Neurocritical Care: Maybe/Summary.
Neurocrit Care. 2022 Aug;37(Suppl 2):166-169. doi: 10.1007/s12028-021-01422-x. Epub 2021 Dec 29.

Cited By

Accelerated and Interpretable Oblique Random Survival Forests.
J Comput Graph Stat. 2024;33(1):192-207. doi: 10.1080/10618600.2023.2231048.
