
Moving Toward Explainable Decisions of Artificial Intelligence Models for the Prediction of Functional Outcomes of Ischemic Stroke Patients

Author Information

Esra Zihni, Bryony L. McGarry, John D. Kelleher

Affiliations

PRECISE4Q, Predictive Modelling in Stroke, Technological University Dublin, Dublin, Ireland

School of Psychological Science, University of Bristol, Bristol, UK

Abstract

Artificial intelligence has the potential to assist clinical decision-making for the treatment of ischemic stroke. However, the decision processes encoded within complex artificial intelligence models, such as neural networks, are notoriously difficult to interpret and validate. The importance of explaining model decisions has resulted in the emergence of explainable artificial intelligence, which aims to understand the inner workings of artificial intelligence models. Here, we give examples of studies that apply artificial intelligence models to predict functional outcomes of ischemic stroke patients, evaluate existing models' predictive power, and discuss the challenges that limit their adoption in the clinic. Furthermore, we identify the studies that explain which model features are essential in predicting functional outcomes. We discuss how these explanations can help mitigate concerns around the trustworthiness of artificial intelligence systems developed for the acute stroke setting. We conclude that explainable artificial intelligence is essential for the reliable deployment of artificial intelligence models in acute stroke care.
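
The abstract refers to explaining which model features are essential for predicting functional outcomes. As a minimal, hypothetical sketch of what such an explanation can look like in practice (not the authors' pipeline or any study reviewed here), the snippet below trains a classifier on synthetic stand-ins for common acute-stroke predictors and ranks them with model-agnostic permutation importance from scikit-learn; every feature name, data value, and parameter is assumed purely for illustration.

```python
# Hypothetical sketch: model-agnostic feature attribution for a functional-outcome classifier.
# All data below are synthetic; feature names are illustrative stand-ins, not a study dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Assumed predictors: age, admission NIHSS, onset-to-treatment time (min), blood glucose (mmol/L)
X = np.column_stack([
    rng.normal(70, 12, n),    # age
    rng.integers(0, 25, n),   # NIHSS
    rng.normal(180, 60, n),   # onset-to-treatment time
    rng.normal(7.0, 2.0, n),  # blood glucose
])
# Synthetic outcome label loosely tied to age and NIHSS to mimic a poor-outcome signal
logit = 0.04 * (X[:, 0] - 70) + 0.15 * X[:, 1] - 1.5
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature degrade held-out accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, imp in zip(["age", "NIHSS", "onset_to_treatment_min", "glucose"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

In this kind of sketch, features whose permutation most degrades held-out performance are flagged as the ones the model relies on, which is one way explanations can be surfaced for clinical scrutiny.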


Similar Articles

1. Explainable Artificial Intelligence Model for Stroke Prediction Using EEG Signal. Sensors (Basel). 2022 Dec 15;22(24):9859. doi: 10.3390/s22249859.
2. Enhanced joint hybrid deep neural network explainable artificial intelligence model for 1-hr ahead solar ultraviolet index prediction. Comput Methods Programs Biomed. 2023 Nov;241:107737. doi: 10.1016/j.cmpb.2023.107737. Epub 2023 Aug 5.
3. Explainable Artificial Intelligence for Predictive Modeling in Healthcare. J Healthc Inform Res. 2022 Feb 11;6(2):228-239. doi: 10.1007/s41666-022-00114-1. eCollection 2022 Jun.
4. Causality and scientific explanation of artificial intelligence systems in biomedicine. Pflugers Arch. 2025 Apr;477(4):543-554. doi: 10.1007/s00424-024-03033-9. Epub 2024 Oct 29.
5. DeepXplainer: An interpretable deep learning based approach for lung cancer detection using explainable artificial intelligence. Comput Methods Programs Biomed. 2024 Jan;243:107879. doi: 10.1016/j.cmpb.2023.107879. Epub 2023 Oct 24.
