Department of Radiation Oncology, Washington University in St. Louis, St. Louis, MO, 63110, USA.
Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO, 63110, USA.
Med Phys. 2019 Sep;46(9):4135-4147. doi: 10.1002/mp.13716. Epub 2019 Aug 7.
The superior soft-tissue contrast achieved using magnetic resonance imaging (MRI) compared to x-ray computed tomography (CT) has led to the popularization of MRI-guided radiation therapy (MR-IGRT), especially in recent years with the advent of first- and second-generation MRI-based therapy delivery systems. The expanding use of these systems is driving interest in MRI-only RT workflows in which MRI is the sole imaging modality used for treatment planning and dose calculation. To enable such a workflow, synthetic CT (sCT) data must be generated from a patient's MRI data so that dose calculations can be performed using the electron density information normally derived from CT images. In this study, we propose a novel deep spatial pyramid convolutional framework for the MRI-to-CT image-to-image translation task and compare its performance to that of the well-established U-Net architecture in a generative adversarial network (GAN) framework.
Our proposed framework uses atrous convolution in a method named atrous spatial pyramid pooling (ASPP) to significantly reduce the total number of parameters required to describe the model while effectively capturing rich, multi-scale structural information in a manner that is not possible with the conventional framework. The proposed framework consists of a generative model composed of stacked encoders and decoders separated by the ASPP module, in which atrous convolution is applied in parallel at increasing rates to encode features at multiple scales. The performance of the proposed method is compared to that of the conventional GAN framework in terms of the time required to train the model and the image quality of the generated sCT, measured by the root mean square error (RMSE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR), as a function of training data set size. Dose calculations based on sCT data generated using the proposed architecture are also compared to clinical plans to evaluate the dosimetric accuracy of the method.
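The abstract does not give implementation details of the ASPP module; as a minimal sketch of the idea described above (parallel atrous convolutions at increasing dilation rates fused by a 1x1 convolution), the following PyTorch example uses hypothetical channel counts and dilation rates that are illustrative assumptions, not the authors' exact configuration:

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Atrous spatial pyramid pooling (sketch): parallel dilated convolutions
    capture multi-scale context with relatively few additional parameters."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):  # rates are assumed, not from the paper
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                # padding = dilation keeps the spatial size unchanged for 3x3 kernels
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        # 1x1 convolution fuses the concatenated multi-scale features
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]
        return self.project(torch.cat(feats, dim=1))

# Example: a bottleneck feature map sitting between stacked encoders and decoders
x = torch.randn(1, 256, 32, 32)
y = ASPP(256, 256)(x)  # spatial size preserved: (1, 256, 32, 32)
```

In this arrangement each branch sees the same input at a different effective receptive field, so large-scale anatomical context can be encoded without deepening the network or enlarging kernels.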
Significant reductions in training time and improvements in image quality are observed at every training data set size when the proposed framework is adopted instead of the conventional framework. Across 1042 test images, values of 17.7 ± 4.3 HU, 0.9995 ± 0.0003, and 71.7 ± 2.3 are observed for the RMSE, SSIM, and PSNR metrics, respectively. Dose distributions calculated from sCT data generated using the proposed framework demonstrate passing rates equal to or greater than 98% using the 3D gamma index with a 2%/2 mm criterion.
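As an illustration of how these per-slice image-quality metrics can be computed when comparing sCT against the reference CT, the following is a minimal sketch using NumPy and scikit-image; the data_range choice and the helper function are illustrative assumptions, not the authors' evaluation code:

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def sct_image_metrics(ct, sct, data_range=None):
    """Compare a synthetic CT slice against the reference CT slice (both in HU)."""
    ct = ct.astype(np.float64)
    sct = sct.astype(np.float64)
    if data_range is None:
        # Dynamic range of the reference slice; the paper's exact choice is not stated.
        data_range = ct.max() - ct.min()
    rmse = np.sqrt(np.mean((ct - sct) ** 2))          # reported in HU
    ssim = structural_similarity(ct, sct, data_range=data_range)
    psnr = peak_signal_noise_ratio(ct, sct, data_range=data_range)
    return rmse, ssim, psnr
```

Reported values would then be the mean ± standard deviation of these quantities over the test set; the dosimetric comparison (3D gamma analysis) is a separate evaluation performed on dose distributions rather than on the images themselves.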
The deep spatial pyramid convolutional framework proposed here demonstrates improved performance compared to the conventional GAN framework previously applied to the image-to-image translation task of sCT generation. Adopting the method is a first step toward an MRI-only RT workflow that enables widespread clinical applications for MR-IGRT, including online adaptive therapy.