
Moving Toward Explainable Decisions of Artificial Intelligence Models for the Prediction of Functional Outcomes of Ischemic Stroke Patients

Authors

Zihni Esra, McGarry Bryony L., Kelleher John D.

Affiliations

PRECISE4Q, Predictive Modelling in Stroke, Technological University Dublin, Dublin, Ireland

School of Psychological Science, University of Bristol, Bristol, UK

DOI: 10.36255/exon-publications-digital-health-explainable-decisions
PMID: 35605071
Abstract

Artificial intelligence has the potential to assist clinical decision-making for the treatment of ischemic stroke. However, the decision processes encoded within complex artificial intelligence models, such as neural networks, are notoriously difficult to interpret and validate. The importance of explaining model decisions has resulted in the emergence of explainable artificial intelligence, which aims to understand the inner workings of artificial intelligence models. Here, we give examples of studies that apply artificial intelligence models to predict functional outcomes of ischemic stroke patients, evaluate existing models’ predictive power, and discuss the challenges that limit their adaptation to the clinic. Furthermore, we identify the studies that explain which model features are essential in predicting functional outcomes. We discuss how these explanations can help mitigate concerns around the trustworthiness of artificial intelligence systems developed for the acute stroke setting. We conclude that explainable artificial intelligence is a must for the reliable deployment of artificial intelligence models in acute stroke care.
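The abstract highlights studies that explain which model features are essential in predicting functional outcomes. As a minimal, hypothetical sketch of one common technique for this, permutation feature importance, the example below scores features of a synthetic stroke-like cohort; all feature names, coefficients, and thresholds are illustrative and are not taken from the paper:

```python
# Hypothetical illustration (not from the paper): permutation feature
# importance, one simple way to measure which inputs drive a model's
# functional-outcome predictions.
import random

random.seed(0)

# Synthetic cohort: age, an NIHSS-like severity score, and a noise feature.
# Outcome: 1 = poor functional outcome. Only the first two features matter.
def make_patient():
    age = random.uniform(40, 90)
    severity = random.uniform(0, 25)
    noise = random.uniform(0, 1)
    poor = 1 if (0.05 * age + 0.2 * severity) > 6.5 else 0
    return [age, severity, noise], poor

data = [make_patient() for _ in range(500)]
X = [features for features, _ in data]
y = [outcome for _, outcome in data]

# A fixed linear rule stands in for a trained classifier.
def predict(x):
    return 1 if (0.05 * x[0] + 0.2 * x[1]) > 6.5 else 0

def accuracy(X, y):
    return sum(predict(x) == t for x, t in zip(X, y)) / len(y)

base = accuracy(X, y)

# Permutation importance: shuffle one feature column at a time and record
# how much the model's accuracy drops; a larger drop means the model
# relies more on that feature.
importance = []
for j in range(3):
    col = [x[j] for x in X]
    random.shuffle(col)
    X_perm = [x[:j] + [col[i]] + x[j + 1:] for i, x in enumerate(X)]
    importance.append(base - accuracy(X_perm, y))

# Expect clear drops for age and severity, and roughly zero for noise.
print(importance)
```

Model-agnostic explanations of this kind can be attached to any black-box predictor, which is one reason they appear frequently in the explainable-AI literature the chapter surveys; production toolkits such as scikit-learn and SHAP offer more rigorous implementations.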


Similar Articles

1. Moving Toward Explainable Decisions of Artificial Intelligence Models for the Prediction of Functional Outcomes of Ischemic Stroke Patients.
2. Explainable Artificial Intelligence Model for Stroke Prediction Using EEG Signal.
Sensors (Basel). 2022 Dec 15;22(24):9859. doi: 10.3390/s22249859.
3. Enhanced joint hybrid deep neural network explainable artificial intelligence model for 1-hr ahead solar ultraviolet index prediction.
Comput Methods Programs Biomed. 2023 Nov;241:107737. doi: 10.1016/j.cmpb.2023.107737. Epub 2023 Aug 5.
4. Explainable Artificial Intelligence for Predictive Modeling in Healthcare.
J Healthc Inform Res. 2022 Feb 11;6(2):228-239. doi: 10.1007/s41666-022-00114-1. eCollection 2022 Jun.
5. Causality and scientific explanation of artificial intelligence systems in biomedicine.
Pflugers Arch. 2025 Apr;477(4):543-554. doi: 10.1007/s00424-024-03033-9. Epub 2024 Oct 29.
6. Interpreting Stroke-Impaired Electromyography Patterns through Explainable Artificial Intelligence.
Sensors (Basel). 2024 Feb 21;24(5):1392. doi: 10.3390/s24051392.
7. An innovative artificial intelligence-based method to compress complex models into explainable, model-agnostic and reduced decision support systems with application to healthcare (NEAR).
Artif Intell Med. 2024 May;151:102841. doi: 10.1016/j.artmed.2024.102841. Epub 2024 Mar 12.
8. DeepXplainer: An interpretable deep learning based approach for lung cancer detection using explainable artificial intelligence.
Comput Methods Programs Biomed. 2024 Jan;243:107879. doi: 10.1016/j.cmpb.2023.107879. Epub 2023 Oct 24.
9. Investigating Protective and Risk Factors and Predictive Insights for Aboriginal Perinatal Mental Health: Explainable Artificial Intelligence Approach.
J Med Internet Res. 2025 Apr 30;27:e68030. doi: 10.2196/68030.
10. GPT-4 as a Clinical Decision Support Tool in Ischemic Stroke Management: Evaluation Study.
JMIR AI. 2025 Mar 7;4:e60391. doi: 10.2196/60391.