Medical Imaging and Diagnostics Lab, National Centre of Artificial Intelligence (NCAI), Pakistan; Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad, 45550, Pakistan.
Comput Biol Med. 2021 Jun;133:104410. doi: 10.1016/j.compbiomed.2021.104410. Epub 2021 Apr 19.
Medical image segmentation is a complex task, yet one of the most essential for diagnostic procedures such as brain tumor detection. Several 3D Convolutional Neural Network (CNN) architectures have achieved remarkable results in brain tumor segmentation. However, due to the black-box nature of CNNs, integrating such models into decisions about diagnosis and treatment is high-risk in healthcare. The lack of interpretability makes it difficult to explain the rationale behind a model's predictions. Hence, the successful deployment of deep learning models in the medical domain requires predictions that are both accurate and transparent. In this paper, we generate 3D visual explanations to analyze a 3D brain tumor segmentation model by extending a post-hoc interpretability technique. We explore the advantages of a gradient-free interpretability approach over gradient-based approaches. Moreover, we interpret the behavior of the segmentation model with respect to the input Magnetic Resonance Imaging (MRI) images and investigate the prediction strategy of the model. We also evaluate the interpretability methodology quantitatively for the medical image segmentation task; this quantitative validation of the extended methodology confirms that our visual explanations do not convey false information. We find that the information captured by the model is consistent with the domain knowledge of human experts, making it more trustworthy. We use the BraTS-2018 dataset to train the 3D brain tumor segmentation network and perform interpretability experiments to generate visual explanations.
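To make the idea of a gradient-free, post-hoc explanation for a 3D segmentation network concrete, the sketch below shows one common perturbation-style approach: occlude 3D patches of the input MRI volume and score each patch by how much the prediction for a target tumor class degrades. This is a minimal illustration only; the abstract does not name the specific technique the authors extend, and the model handle `model`, the patch size, and the target-class indexing are assumptions for the example.

```python
# Minimal sketch of a gradient-free (occlusion/perturbation-style) saliency map for a
# 3D segmentation network, assuming a PyTorch model that maps a multi-modal MRI volume
# of shape (1, C, D, H, W) to per-voxel class logits of shape (1, n_classes, D, H, W).
# All names and hyperparameters here are illustrative, not the paper's actual method.
import torch

@torch.no_grad()
def occlusion_saliency_3d(model, volume, target_class, patch=16, stride=16, fill=0.0):
    """Score each 3D patch by the drop it causes in the summed logit of target_class."""
    model.eval()
    base_score = model(volume)[:, target_class].sum()
    _, _, D, H, W = volume.shape
    heatmap = torch.zeros(D, H, W)
    for z in range(0, D, stride):
        for y in range(0, H, stride):
            for x in range(0, W, stride):
                occluded = volume.clone()
                occluded[:, :, z:z+patch, y:y+patch, x:x+patch] = fill
                score = model(occluded)[:, target_class].sum()
                # A larger drop means the occluded region mattered more
                # for the predicted segmentation of the target class.
                heatmap[z:z+patch, y:y+patch, x:x+patch] = (base_score - score).item()
    return heatmap
```

Because no gradients are required, a sketch like this applies to any trained segmentation model as a black box, which is the practical appeal of gradient-free approaches mentioned in the abstract; the trade-off is the cost of many forward passes over a 3D volume.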