Zhang Zeyu, Jiang Zhuoran, Zhong Hualiang, Lu Ke, Yin Fang-Fang, Ren Lei
Duke University Medical Center, Durham, North Carolina, USA.
Medical College of Wisconsin, Milwaukee, Wisconsin, USA.
Precis Radiat Oncol. 2022 Jun;6(2):110-118. doi: 10.1002/pro6.1163. Epub 2022 Jun 11.
Despite its prevalence, cone beam computed tomography (CBCT) has poor soft-tissue contrast, making it challenging to localize liver tumors. We propose a patient-specific deep learning model to generate synthetic magnetic resonance imaging (MRI) from CBCT to improve tumor localization.
A key innovation is the use of patient-specific CBCT-MRI image pairs to train a deep learning model to generate synthetic MRI from CBCT. Specifically, the patient's planning CT was deformably registered to a prior MRI and then used to simulate CBCT through simulated projections and Feldkamp-Davis-Kress (FDK) reconstruction. These CBCT-MRI pairs were augmented with translations and rotations to generate sufficient patient-specific training data. A U-Net-based deep learning model was developed and trained to generate synthetic MRI of the liver from CBCT, and was then tested on a separate CBCT dataset. The synthetic MRIs were quantitatively evaluated against ground-truth MRI.
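The augmentation step above applies identical translations and rotations to each CBCT-MRI pair so the two images stay spatially aligned. A minimal sketch of that idea, using NumPy and `scipy.ndimage` with illustrative shift and angle ranges (the paper's exact augmentation parameters are not given here):

```python
import numpy as np
from scipy import ndimage

def augment_pair(cbct, mri, shifts, angles):
    """Augment a paired CBCT/MRI slice with identical translations and
    rotations so the pair remains co-registered. `shifts` are (dy, dx)
    pixel offsets and `angles` are degrees -- hypothetical ranges for
    illustration, not the published settings."""
    pairs = []
    for dy, dx in shifts:
        for ang in angles:
            # Apply the SAME geometric transform to both modalities.
            c = ndimage.shift(cbct, (dy, dx), order=1, mode="nearest")
            m = ndimage.shift(mri, (dy, dx), order=1, mode="nearest")
            c = ndimage.rotate(c, ang, reshape=False, order=1, mode="nearest")
            m = ndimage.rotate(m, ang, reshape=False, order=1, mode="nearest")
            pairs.append((c, m))
    return pairs

rng = np.random.default_rng(0)
cbct = rng.random((64, 64))
mri = rng.random((64, 64))
aug = augment_pair(cbct, mri,
                   shifts=[(0, 0), (2, -3)],
                   angles=[-5.0, 0.0, 5.0])
print(len(aug))
```

Because the same transform is applied to both images, each augmented pair remains a valid input-target example for the CBCT-to-MRI model.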
The synthetic MRI demonstrated excellent soft-tissue contrast with clear tumor visualization. On average, the synthetic MRI achieved a peak signal-to-noise ratio of 28.01 dB, a mean square error of 0.025, and a structural similarity index of 0.929, outperforming the CBCT images. Model performance was consistent across all three patients tested.
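The PSNR and MSE figures above are standard image-similarity metrics computed against the ground-truth MRI. A minimal sketch of both, assuming intensities normalized to [0, 1] (SSIM additionally requires windowed local statistics and is typically computed with a library such as scikit-image, so it is omitted here):

```python
import numpy as np

def mse(ref, img):
    """Mean square error between a ground-truth image and a prediction."""
    ref = ref.astype(np.float64)
    img = img.astype(np.float64)
    return float(np.mean((ref - img) ** 2))

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to ground truth."""
    err = mse(ref, img)
    if err == 0:
        return float("inf")
    return float(10.0 * np.log10(data_range ** 2 / err))

# Toy example: a "synthetic MRI" that is the ground truth plus small noise.
rng = np.random.default_rng(0)
gt = rng.random((32, 32))
pred = np.clip(gt + rng.normal(0.0, 0.01, gt.shape), 0.0, 1.0)
print(mse(gt, pred), psnr(gt, pred))
```

With lightly perturbed images the PSNR lands well above that of a noisier comparison image, which is the sense in which the synthetic MRI "outperforms" CBCT on these metrics.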
Our study demonstrated the feasibility of a patient-specific model to generate synthetic MRI from CBCT for liver tumor localization, opening up the potential to democratize MRI guidance in clinics equipped with conventional linear accelerators (LINACs).