Département de Physique, de génie Physique et d'Optique, et Centre de Recherche sur le Cancer, Université Laval, Québec City, QC, Canada.
Service de Physique Médicale et Radioprotection, Centre Intégré de Cancérologie, CHU de Québec - Université Laval et Centre de recherche du CHU de Québec, Québec City, QC, Canada.
BMC Med Imaging. 2023 Dec 7;23(1):203. doi: 10.1186/s12880-023-01160-w.
This study proposed an end-to-end unsupervised medical fusion generative adversarial network, MedFusionGAN, to fuse computed tomography (CT) and high-resolution isotropic 3D T1-Gd magnetic resonance imaging (MRI) sequences into a single image that combines CT bone structure with MRI soft-tissue contrast, with the aim of improving target delineation and reducing radiotherapy planning time.
We used a publicly available multicenter medical dataset (GLIS-RT, 230 patients) from the Cancer Imaging Archive. To improve the model's generalization, we considered different imaging protocols and patients with various brain tumor types, including metastases. The proposed MedFusionGAN consisted of one generator network and one discriminator network trained in an adversarial scenario. Content, style, and L1 losses were used to train the generator to preserve the texture and structure information of the MRI and CT images.
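For readers unfamiliar with this loss combination, the sketch below illustrates how an adversarial term can be combined with content, style, and L1 terms to train a fusion generator. It is a minimal illustration only: the network architectures, feature extractor, and loss weights are assumptions, not the published MedFusionGAN implementation.

```python
# Illustrative sketch only: the abstract specifies adversarial training with
# content, style, and L1 losses for the generator; the discriminator, feature
# extractor, and loss weights (w_*) below are assumptions for illustration.
import torch
import torch.nn.functional as F

def gram_matrix(feat):
    # Style is commonly compared via Gram matrices of feature maps.
    b, c, h, w = feat.size()
    f = feat.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def generator_loss(fused, ct, mri, disc, feat_extractor,
                   w_adv=1.0, w_content=1.0, w_style=10.0, w_l1=1.0):
    # Adversarial term: push the discriminator to label the fused image as real.
    pred = disc(fused)
    adv = F.binary_cross_entropy_with_logits(pred, torch.ones_like(pred))

    # Content term: keep deep-feature structure of both source images.
    f_fused = feat_extractor(fused)
    f_mri, f_ct = feat_extractor(mri), feat_extractor(ct)
    content = F.mse_loss(f_fused, f_mri) + F.mse_loss(f_fused, f_ct)

    # Style term: match texture statistics (Gram matrices) of the sources.
    style = (F.mse_loss(gram_matrix(f_fused), gram_matrix(f_mri)) +
             F.mse_loss(gram_matrix(f_fused), gram_matrix(f_ct)))

    # L1 term: pixel-level fidelity to the source images.
    l1 = F.l1_loss(fused, mri) + F.l1_loss(fused, ct)

    return w_adv * adv + w_content * content + w_style * style + w_l1 * l1
```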
MedFusionGAN successfully generated fused images with MRI soft-tissue and CT bone contrast. The results were quantitatively and qualitatively compared with seven traditional and eight deep learning (DL) state-of-the-art methods. Qualitatively, our method fused the source images at the highest spatial resolution without introducing image artifacts. We reported nine quantitative metrics to quantify the preservation of structural similarity, contrast, distortion level, and image edges in the fused images. Our method outperformed both traditional and DL methods on six of the nine metrics, and it ranked second on three metrics against traditional methods and on two metrics against DL methods. To compare soft-tissue contrast, intensity profiles along the tumor and the tumor contours produced by the fusion methods were evaluated. MedFusionGAN provided a more consistent intensity profile and better segmentation performance.
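The abstract does not enumerate the nine metrics, so the example below uses SSIM against each source modality only as one representative structural-similarity measure of fusion quality, computed with scikit-image; the function name and averaging scheme are illustrative assumptions.

```python
# Representative fusion-quality check (not the paper's full metric suite):
# SSIM of the fused image against each co-registered source modality.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def fusion_ssim(fused, ct, mri):
    """Average SSIM of the fused image against both source images."""
    data_range = fused.max() - fused.min()
    ssim_ct = ssim(fused, ct, data_range=data_range)
    ssim_mri = ssim(fused, mri, data_range=data_range)
    return 0.5 * (ssim_ct + ssim_mri)

# Toy usage with synthetic 2D slices; a real evaluation would use
# co-registered CT/MRI slices and the corresponding fused output.
rng = np.random.default_rng(0)
ct_slice = rng.random((256, 256))
mri_slice = rng.random((256, 256))
fused_slice = 0.5 * (ct_slice + mri_slice)
print(fusion_ssim(fused_slice, ct_slice, mri_slice))
```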
The proposed end-to-end unsupervised method successfully fused MRI and CT images. The fused image could improve delineation of targets and organs at risk (OARs), an important step in radiotherapy treatment planning.