Martin Sophie A, Zhao An, Qu Jiongqi, Imms Phoebe, Irimia Andrei, Barkhof Frederik, Cole James H
UCL Hawkes Institute, University College London, London, WC1E 6BT, UK.
UCL Queen Square Institute of Neurology, University College London, UK.
medRxiv. 2025 Feb 14:2025.01.13.25320382. doi: 10.1101/2025.01.13.25320382.
Artificial intelligence and neuroimaging enable accurate dementia prediction, but 'black box' models can be difficult to trust. Explainable artificial intelligence (XAI) encompasses techniques for understanding model behaviour and the influence of input features; however, deciding which method is most appropriate is non-trivial. Vision transformers (ViTs) have also gained popularity, providing a self-explainable alternative to traditional convolutional neural networks (CNNs).
We used T1-weighted MRI to train models on two tasks: Alzheimer's disease (AD) classification (diagnosis) and prediction of conversion from mild cognitive impairment (MCI) to AD (prognosis). We compared ten XAI methods across CNN and ViT architectures.
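As a minimal sketch of how a post-hoc XAI method is applied in this setting (not the authors' code), the snippet below runs Integrated Gradients from the Captum library on a toy 3D CNN classifier for volumetric MRI. The network architecture, input shape, and class index are illustrative assumptions, not those of the paper.

```python
# Minimal sketch (illustrative, not the authors' pipeline): attributing a
# 3D CNN's AD prediction back to input voxels with Integrated Gradients.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

class Tiny3DCNN(nn.Module):
    """Toy stand-in for an AD-vs-control classifier on 3D volumes."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(4),  # collapse to a fixed 4x4x4 grid
        )
        self.classifier = nn.Linear(8 * 4 * 4 * 4, n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = Tiny3DCNN().eval()

# One fake T1-weighted volume: batch x channel x depth x height x width.
volume = torch.randn(1, 1, 32, 32, 32, requires_grad=True)

# Attribute the logit of the assumed "AD" class (index 1) to each voxel.
ig = IntegratedGradients(model)
attributions = ig.attribute(volume, target=1, n_steps=32)

# A voxel-wise saliency map with the same shape as the input, which can
# be overlaid on the MRI to check which brain regions drive the output.
print(attributions.shape)  # torch.Size([1, 1, 32, 32, 32])
```

The same attribution call pattern generalises to other gradient-based methods in Captum (e.g. saliency or Grad-CAM variants), which is what makes side-by-side comparisons of multiple XAI methods practical.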
Models achieved balanced accuracies of 81% and 67% for diagnosis and prognosis, respectively. XAI outputs highlighted brain regions relevant to AD and contained information useful for MCI prognosis.
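For reference, the balanced accuracy reported above is the mean of per-class recalls, which for a binary task reduces to (sensitivity + specificity) / 2; it avoids inflated scores on class-imbalanced cohorts such as MCI converters vs non-converters. A small sketch with made-up labels:

```python
# Illustrative only: the labels below are invented, not study data.
from sklearn.metrics import balanced_accuracy_score

y_true = [0, 0, 0, 1, 1, 1, 1, 1]   # 0 = control/stable, 1 = AD/converter
y_pred = [0, 0, 1, 1, 1, 1, 0, 1]

# Recall for class 0 is 2/3, for class 1 is 4/5; the mean is ~0.733.
print(balanced_accuracy_score(y_true, y_pred))  # 0.733...
```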
XAI can be used to verify that models are utilising relevant features and to generate valuable measures for further analysis.