Esra Zihni, Bryony L. McGarry, John D. Kelleher
PRECISE4Q, Predictive Modelling in Stroke, Technological University Dublin, Dublin, Ireland
School of Psychological Science, University of Bristol, Bristol, UK
Artificial intelligence has the potential to assist clinical decision-making in the treatment of ischemic stroke. However, the decision processes encoded within complex artificial intelligence models, such as neural networks, are notoriously difficult to interpret and validate. The importance of explaining model decisions has driven the emergence of explainable artificial intelligence, which aims to make the inner workings of artificial intelligence models understandable. Here, we give examples of studies that apply artificial intelligence models to predict functional outcomes of ischemic stroke patients, evaluate the predictive power of existing models, and discuss the challenges that limit their adoption in the clinic. Furthermore, we identify the studies that explain which model features are essential in predicting functional outcomes. We discuss how these explanations can help mitigate concerns around the trustworthiness of artificial intelligence systems developed for the acute stroke setting. We conclude that explainable artificial intelligence is essential for the reliable deployment of artificial intelligence models in acute stroke care.