

Explainable AI Points to White Matter Hyperintensities for Alzheimer's Disease Identification: a Preliminary Study.

Publication Information

Annu Int Conf IEEE Eng Med Biol Soc. 2022 Jul;2022:484-487. doi: 10.1109/EMBC48229.2022.9871306.

Abstract

Deep learning approaches are powerful tools in a great variety of classification tasks. However, their acceptance and trust in clinical settings remain limited because of their typical "black box" character: their architecture is well known, but the processes they employ in classification are often inaccessible to humans. In this work, we explored the problem of "explainable AI" (XAI) in Alzheimer's disease (AD) classification tasks. Data from a neuroimaging cohort (n = 251 from OASIS-3) of early-stage AD dementia patients and healthy controls (HC) were analysed. The MR scans were first fed to a pre-trained DL model, which achieved good performance on the test set (AUC: 0.82, TPR: 0.78, TNR: 0.81). The results were then investigated with an XAI approach (the Occlusion Sensitivity method), which yields relevance (RV) measures as its output. We compared the RV values obtained within healthy tissue with those underlying white matter hyperintensity (WMH) lesions. The analysis was conducted on four groups of data, obtained by stratifying correctly and incorrectly classified images according to the participants' health condition (AD/HC). The results showed that the DL model preferentially leveraged lesioned brain areas for AD identification: a statistically significant difference ( ) between the WMH and healthy-tissue contributions was indeed observed for AD recognition, unlike in the HC case (p = 0.27). Clinical Relevance - Although preliminary, this study suggests that DL models can be trained to use known clinical information, and it reinforces the role of WMHs as a neuroimaging biomarker for AD dementia. These findings have clinical relevance because they prepare the ground for a progressive increase in the level of trust placed in DL approaches.
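The Occlusion Sensitivity method referenced in the abstract works by masking one image region at a time and measuring how much the classifier's score drops; regions whose occlusion causes a large drop are assigned high relevance. The following is a minimal sketch of that idea, not the authors' code: the patch size, stride, fill value, and the toy stand-in classifier are all illustrative assumptions.

```python
import numpy as np

def occlusion_sensitivity(image, model, patch=4, stride=4, fill=0.0):
    """Slide an occluding patch over `image` and record the score drop.

    Returns a relevance (RV) map the same shape as `image`: the average
    drop in model score caused by occluding each pixel's neighbourhood.
    """
    h, w = image.shape
    base = model(image)                    # score on the intact image
    relevance = np.zeros((h, w))
    counts = np.zeros((h, w))
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill
            drop = base - model(occluded)  # large drop => relevant region
            relevance[y:y + patch, x:x + patch] += drop
            counts[y:y + patch, x:x + patch] += 1
    return relevance / np.maximum(counts, 1)

# Toy stand-in for a trained AD/HC classifier: scores an image by the
# mean intensity of its central region, so occluding the centre hurts.
def toy_model(img):
    return float(img[6:10, 6:10].mean())

rng = np.random.default_rng(0)
img = rng.random((16, 16))
rv = occlusion_sensitivity(img, toy_model, patch=4, stride=4)
# Central patches receive higher relevance than the untouched borders.
print(rv[6:10, 6:10].mean() > rv[:4, :4].mean())  # True
```

In the study, RV maps produced this way were then compared between voxels inside WMH lesions and voxels in healthy tissue; the sketch above only shows how a single relevance map is obtained.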

