Department of Physics and Computational Radiology, Division of Radiology and Nuclear Medicine, Oslo University Hospital, Sognsvannsveien 20, 0372, Oslo, Norway.
Biomedical Data Science Laboratory, Instituto Universitario de Tecnologias de la Informacion Comunicaciones, Universitat Politècnica de València, 46022, Valencia, Spain.
BMC Med Inform Decis Mak. 2023 Oct 18;23(1):225. doi: 10.1186/s12911-023-02320-2.
Saliency-based algorithms can explain the relationship between input image pixels and deep-learning model predictions. However, it may be difficult to assess the clinical value of the most important image features, and of the model predictions derived from the raw saliency map. This study proposes to enhance the interpretability of a saliency-based deep learning model for survival classification of patients with gliomas by extracting domain knowledge-based information from the raw saliency maps.
Our study includes presurgical T1-weighted (pre- and post-contrast), T2-weighted, and T2-FLAIR MRIs of 147 glioma patients from the BraTS 2020 challenge dataset, aligned to the SRI24 anatomical atlas. Each image exam includes a segmentation mask and the overall survival (OS) from time of diagnosis (in days). This dataset was divided into training ([Formula: see text]) and validation ([Formula: see text]) datasets. The extent of surgical resection for all patients was gross total resection. We categorized the data into 42 short-term (mean [Formula: see text] days), 30 medium-term ([Formula: see text] days), and 46 long-term ([Formula: see text] days) survivors. A 3D convolutional neural network (CNN) trained on brain tumour MRI volumes classified all patients into an expected prognosis of short-term, medium-term, or long-term survival. We extended the popular 2D Gradient-weighted Class Activation Mapping (Grad-CAM) to 3D for saliency-map generation and combined it with the anatomical atlas to extract brain regions, brain volumes, and probability maps that reveal domain knowledge-based information.
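The 3D extension of Grad-CAM described above reduces, at its core, to gradient-weighted pooling of volumetric feature maps. The following minimal NumPy sketch illustrates that step under stated assumptions (the function name `grad_cam_3d` and the array shapes are illustrative; the original implementation and its CNN backbone are not specified in the abstract):

```python
import numpy as np

def grad_cam_3d(activations, gradients):
    """Grad-CAM generalized to 3D volumes (illustrative sketch, not the
    authors' exact implementation).

    activations: (C, D, H, W) feature maps from the last convolutional layer.
    gradients:   (C, D, H, W) gradients of the class score w.r.t. those maps.
    Returns a (D, H, W) saliency volume normalized to [0, 1].
    """
    # Channel weights: global-average-pool each gradient map over the volume.
    alphas = gradients.mean(axis=(1, 2, 3))            # shape (C,)
    # Weighted sum of feature maps, then ReLU to keep positive evidence only.
    cam = np.tensordot(alphas, activations, axes=1)    # shape (D, H, W)
    cam = np.maximum(cam, 0.0)
    if cam.max() > 0:
        cam = cam / cam.max()                          # normalize to [0, 1]
    return cam
```

In practice, the resulting low-resolution saliency volume is upsampled to the input MRI resolution before it is intersected with the anatomical atlas.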
For each OS class, a larger tumor volume was associated with a shorter OS. There were 10, 7, and 27 tumor locations in brain regions uniquely associated with short-term, medium-term, and long-term survival, respectively. Tumors located in the transverse temporal gyrus, fusiform gyrus, and pallidum were associated with short-, medium-, and long-term survival, respectively. The visual and textual information displayed during OS prediction highlights the tumor location and the contribution of different brain regions to the prediction of OS. This algorithm design feature assists the physician in analyzing and understanding the different model prediction stages.
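The per-region contributions reported above come from intersecting the saliency map with atlas labels. A minimal sketch of that aggregation, assuming a voxel-wise saliency volume and a co-registered integer-labeled atlas (the function and argument names are hypothetical):

```python
import numpy as np

def region_contributions(saliency, atlas, region_names):
    """Fraction of total saliency falling inside each atlas region
    (illustrative sketch; `region_names` maps atlas label -> region name)."""
    total = saliency.sum()
    contributions = {}
    for label, name in region_names.items():
        mask = atlas == label                 # voxels belonging to this region
        if mask.any() and total > 0:
            contributions[name] = float(saliency[mask].sum() / total)
    return contributions
```

Regions with the largest fractions are then reported as the locations driving the predicted OS class.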
Domain knowledge-based information extracted from the saliency map can enhance the interpretability of deep learning models. Our findings show that tumors overlapping eloquent brain regions are associated with short patient survival.