Jin Weina, Li Xiaoxiao, Fatehi Mostafa, Hamarneh Ghassan
School of Computing Science, Simon Fraser University, Burnaby, BC, V5A 1S6, Canada.
Department of Electrical and Computer Engineering, The University of British Columbia, Vancouver, BC, V6T 1Z4, Canada.
MethodsX. 2023 Jan 10;10:102009. doi: 10.1016/j.mex.2023.102009. eCollection 2023.
Explaining model decisions from medical image inputs is necessary for deploying deep neural network (DNN) based models as clinical decision assistants. The acquisition of multi-modal medical images is pervasive in clinical practice for supporting the decision-making process, as the modalities capture different aspects of the same underlying regions of interest. Explaining DNN decisions on multi-modal medical images is thus a clinically important problem. Our methods adopt commonly used post-hoc artificial intelligence feature attribution methods to explain DNN decisions on multi-modal medical images, covering two categories: gradient-based and perturbation-based methods.
• Gradient-based explanation methods, such as Guided BackProp and DeepLIFT, use the gradient signal to estimate feature importance for the model prediction.
• Perturbation-based methods, such as occlusion, LIME, and Kernel SHAP, use input-output sampling pairs to estimate feature importance.
• We describe the implementation details needed to make these methods work on multi-modal image input, and we make the implementation code available.
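The two attribution categories above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' released implementation: the tiny CNN, the input sizes, and the choice to stack modalities along the channel axis (one channel per modality, so each channel slice of the attribution map is that modality's importance map) are all assumptions made for the example. Vanilla gradient saliency stands in for the gradient-based family, and a simple sliding-patch occlusion stands in for the perturbation-based family.

```python
# Assumptions for this sketch: a multi-modal image is stacked along the
# channel axis (one channel per modality), and the classifier below is a
# stand-in for a trained model.
import torch
import torch.nn as nn

torch.manual_seed(0)

n_modalities, H, W = 2, 8, 8          # e.g. two MRI sequences (illustrative)
model = nn.Sequential(                # hypothetical stand-in classifier
    nn.Conv2d(n_modalities, 4, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(4 * H * W, 3),          # 3 output classes
)
model.eval()

x = torch.rand(1, n_modalities, H, W, requires_grad=True)
target = model(x).argmax(dim=1)       # explain the predicted class

# --- Gradient-based attribution (vanilla saliency) ---
# Gradient of the target logit w.r.t. the input gives one importance score
# per pixel per modality; channel m of the result is modality m's map.
score = model(x)[0, target]
score.backward()
grad_attr = x.grad.detach().abs()     # shape: (1, n_modalities, H, W)

# --- Perturbation-based attribution (occlusion) ---
# Slide a baseline-valued patch over each modality separately and record
# the drop in the target logit caused by occluding that patch.
def occlusion(model, x, target, patch=4, baseline=0.0):
    with torch.no_grad():
        base_score = model(x)[0, target].item()
        attr = torch.zeros_like(x)
        for m in range(x.shape[1]):                     # each modality
            for i in range(0, x.shape[2], patch):
                for j in range(0, x.shape[3], patch):
                    x_pert = x.clone()
                    x_pert[0, m, i:i+patch, j:j+patch] = baseline
                    drop = base_score - model(x_pert)[0, target].item()
                    attr[0, m, i:i+patch, j:j+patch] = drop
        return attr

occ_attr = occlusion(model, x.detach(), target)

print(grad_attr.shape, occ_attr.shape)
```

Because each modality occupies its own channel, both attribution tensors can be split per modality for inspection, which is the key point when explaining multi-modal input: the occlusion loop perturbs one modality at a time, so a modality's map reflects only its own contribution to the prediction.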