
Going Beyond Saliency Maps: Training Deep Models to Interpret Deep Models.

Authors

Liu Zixuan, Adeli Ehsan, Pohl Kilian M, Zhao Qingyu

Affiliations

Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA.

Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA 94305, USA.

Publication Information

Inf Process Med Imaging. 2021 Jun;12729:71-82. doi: 10.1007/978-3-030-78191-0_6. Epub 2021 Jun 14.

Abstract

Interpretability is a critical factor in applying complex deep learning models to advance the understanding of brain disorders in neuroimaging studies. To interpret the decision process of a trained classifier, existing techniques typically rely on saliency maps, which quantify the voxel-wise or feature-level importance for classification through partial derivatives. Despite providing some level of localization, these maps are not human-understandable from the neuroscience perspective, as they often do not inform the specific type of morphological changes linked to the brain disorder. Inspired by the image-to-image translation scheme, we propose to train simulator networks that inject (or remove) patterns of the disease into a given MRI based on a warping operation, such that the classifier increases (or decreases) its confidence in labeling the simulated MRI as diseased. To increase the robustness of training, we couple the two simulators into a unified model. We applied our approach to interpreting classifiers trained on a synthetic dataset and two neuroimaging datasets to visualize the effects of Alzheimer's disease and alcohol dependence. Compared to the saliency maps generated by baseline approaches, our simulations and visualizations, based on the Jacobian determinants of the warping field, reveal meaningful and understandable patterns related to the diseases.
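The visualization described above rests on the Jacobian determinant of the warping field: at each voxel, a determinant above 1 indicates simulated local expansion and below 1 local shrinkage (e.g. atrophy). A minimal sketch of that computation, assuming a dense 3-D displacement field (the function name and array layout are illustrative, not the authors' implementation):

```python
import numpy as np

def jacobian_determinant(disp):
    """Voxel-wise Jacobian determinant of the warp phi(x) = x + disp(x).

    disp: displacement field of shape (3, X, Y, Z).
    Returns an (X, Y, Z) array; values > 1 mark local expansion,
    values < 1 local shrinkage (e.g. simulated atrophy).
    """
    # Spatial gradients of each displacement component:
    # grads[i][j] approximates d disp_i / d x_j via finite differences.
    grads = [np.gradient(disp[i], axis=(0, 1, 2)) for i in range(3)]

    # Jacobian of phi = identity matrix + displacement gradient.
    jac = np.empty(disp.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            jac[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)

    # Batched 3x3 determinant over all voxels.
    return np.linalg.det(jac)

# Sanity check: the identity warp (zero displacement) has determinant 1.
zero_disp = np.zeros((3, 8, 8, 8))
print(np.allclose(jacobian_determinant(zero_disp), 1.0))  # True
```

A uniform scaling warp behaves as expected too: a displacement of 0.1 times the voxel coordinates yields a determinant of 1.1³ ≈ 1.331 everywhere, consistent with isotropic expansion.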



