IEEE J Biomed Health Inform. 2022 Feb;26(2):673-684. doi: 10.1109/JBHI.2021.3095476. Epub 2022 Feb 4.
Effective fusion of multimodal magnetic resonance imaging (MRI) can significantly boost the accuracy of glioma grading, thanks to the complementary information provided by different imaging modalities. However, how to extract the common and distinctive information from MRI to achieve this complementarity remains an open problem in information fusion research. In this study, we propose a deep neural network model, termed the multimodal disentangled variational autoencoder (MMD-VAE), for glioma grading based on radiomics features extracted from preoperative multimodal MR images. Specifically, radiomics features are quantified and extracted from the region of interest for each modality. The latent representations that a variational autoencoder learns for these features are then disentangled into common and distinctive representations, capturing the shared and complementary information across modalities. A cross-modality reconstruction loss and a common-distinctive loss are designed to ensure the effectiveness of the disentangled representations. Finally, the disentangled common and distinctive representations are fused to predict the glioma grade, and SHapley Additive exPlanations (SHAP) is adopted to quantitatively interpret and analyze the contribution of the most important features to grading. Experimental results on two benchmark datasets demonstrate that the proposed MMD-VAE model achieves encouraging predictive performance (AUC: 0.9939) on a public dataset and good generalization performance (AUC: 0.9611) on a cross-institutional private dataset. These quantitative results and interpretations may help radiologists better understand gliomas and make better-informed treatment decisions, improving clinical outcomes.
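The disentanglement idea described above can be sketched in code. This is not the authors' implementation: the layer sizes, the exact form of the losses (MSE agreement between common codes, squared cosine similarity as a common-vs-distinctive separation penalty), and all names here are illustrative assumptions, shown only to make the abstract's common/distinctive split and cross-modality reconstruction concrete.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityEncoder(nn.Module):
    """Encodes one modality's radiomics feature vector into a 'common' and a
    'distinctive' Gaussian latent (hypothetical architecture, not from the paper)."""
    def __init__(self, in_dim, z_dim):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.mu_c = nn.Linear(64, z_dim)  # mean of common latent
        self.lv_c = nn.Linear(64, z_dim)  # log-variance of common latent
        self.mu_d = nn.Linear(64, z_dim)  # mean of distinctive latent
        self.lv_d = nn.Linear(64, z_dim)  # log-variance of distinctive latent

    def forward(self, x):
        h = self.backbone(x)
        return (self.mu_c(h), self.lv_c(h)), (self.mu_d(h), self.lv_d(h))

def reparameterize(mu, logvar):
    # standard VAE reparameterization trick
    return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

def common_distinctive_loss(zc, zd):
    """zc, zd: lists of (batch, z_dim) latents, one entry per modality.
    Pulls common codes of different modalities together and penalizes overlap
    between each modality's common and distinctive codes (one plausible choice)."""
    n = len(zc)
    pull = sum(F.mse_loss(zc[i], zc[j]) for i in range(n) for j in range(i + 1, n))
    sep = sum(F.cosine_similarity(zc[i], zd[i], dim=1).pow(2).mean() for i in range(n))
    return pull + sep

# --- tiny demo with two modalities (e.g. two MRI sequences' radiomics vectors) ---
torch.manual_seed(0)
enc1, enc2 = ModalityEncoder(100, 8), ModalityEncoder(100, 8)
dec1 = nn.Linear(16, 100)  # decodes [common ; distinctive] back to modality-1 features
x1, x2 = torch.randn(4, 100), torch.randn(4, 100)

(c1m, c1v), (d1m, d1v) = enc1(x1)
(c2m, c2v), (d2m, d2v) = enc2(x2)
zc1, zd1 = reparameterize(c1m, c1v), reparameterize(d1m, d1v)
zc2, zd2 = reparameterize(c2m, c2v), reparameterize(d2m, d2v)

# cross-modality reconstruction: rebuild modality 1 from modality 2's common
# code combined with modality 1's own distinctive code
x1_cross = dec1(torch.cat([zc2, zd1], dim=1))
loss = F.mse_loss(x1_cross, x1) + common_distinctive_loss([zc1, zc2], [zd1, zd2])
```

In a full model the fused `[common ; distinctive]` codes would feed a grading classifier, and a KL term on each Gaussian latent would complete the VAE objective; both are omitted here for brevity.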