

Anatomic Interpretability in Neuroimage Deep Learning: Saliency Approaches for Typical Aging and Traumatic Brain Injury.

Author Information

Guo Kevin, Chaudhari Nikhil, Jafar Tamara, Chowdhury Nahian, Bogdan Paul, Irimia Andrei

Affiliations

Thomas Lord Department of Computer Science, Viterbi School of Engineering, University of Southern California.

Corwin D. Denney Research Center, Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California.

Publication Information

Res Sq. 2024 Oct 16:rs.3.rs-4960427. doi: 10.21203/rs.3.rs-4960427/v1.

Abstract

The black-box nature of deep neural networks (DNNs) makes researchers and clinicians hesitant to rely on their findings. Saliency maps can enhance DNN explainability by suggesting the anatomic localization of relevant brain features. This study compares seven popular attribution-based saliency approaches to assign neuroanatomic interpretability to DNNs that estimate biological brain age (BA) from magnetic resonance imaging (MRI). Cognitively normal (CN) adults ( males; mean age: 65.82 ± 8.89 years) are included for DNN training, testing, validation, and saliency map generation to estimate BA. To study saliency robustness to the presence of anatomic deviations from normality, saliency maps are also generated for adults with mild traumatic brain injury (mTBI, males; mean age: 55.3 ± 9.9 years). We assess saliency methods' capacities to capture known anatomic features of brain aging and compare them to a surrogate ground truth whose anatomic saliency is known. Anatomic aging features are identified most reliably by the integrated gradients method, which outperforms all others through its ability to localize relevant anatomic features. Gradient Shapley additive explanations, input × gradient, and masked gradient perform less consistently but still highlight ubiquitous neuroanatomic features of aging (ventricle dilation, hippocampal atrophy, sulcal widening). Methods involving gradient saliency, guided backpropagation, and guided gradient-weighted class activation mapping localize saliency outside the brain, which is undesirable. Our research highlights the relative tradeoffs of saliency methods for interpreting DNN findings during BA estimation in typical aging and after mTBI.
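The integrated gradients method, which the abstract reports as the most reliable for localizing anatomic features, attributes a model's output to each input voxel by accumulating gradients along a straight-line path from a baseline image to the actual input. A minimal sketch of the standard technique (per Sundararajan et al.) is below; this is not the authors' implementation, and `model_fn` is a hypothetical interface returning the scalar output and its gradient with respect to the input.

```python
import numpy as np

def integrated_gradients(model_fn, x, baseline=None, steps=50):
    """Approximate integrated gradients for input x.

    model_fn(x) -> (scalar output, gradient of output w.r.t. x).
    baseline defaults to an all-zero input (e.g. a blank MRI volume).
    """
    if baseline is None:
        baseline = np.zeros_like(x)
    # Riemann-sum approximation of the path integral of gradients
    # along the straight line from baseline to x.
    total = np.zeros_like(x)
    for alpha in np.linspace(0.0, 1.0, steps):
        point = baseline + alpha * (x - baseline)
        _, grad = model_fn(point)
        total += grad
    avg_grad = total / steps
    # Per-voxel attribution; attributions sum (approximately) to
    # f(x) - f(baseline), the "completeness" property.
    return (x - baseline) * avg_grad
```

For a linear model the gradient is constant along the path, so the attributions are exact and sum to the difference in model output between the input and the baseline, which is the property that makes the resulting maps anatomically interpretable as per-voxel contributions to the estimated brain age.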


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4b38/11527355/92a1d00cd687/nihpp-rs4960427v1-f0001.jpg
