Hajar Emami, Ming Dong, Siamak P. Nejad-Davarani, Carri K. Glide-Hurst
Department of Computer Science, Wayne State University, Detroit, MI, 48202, USA.
Department of Radiation Oncology, Henry Ford Health System, Detroit, MI, 48202, USA.
Med Phys. 2018 Jun 14. doi: 10.1002/mp.13047.
While MR-only treatment planning using synthetic CTs (synCTs) offers the potential to streamline clinical workflow, an efficient, automated method for generating synCTs in the brain is needed to facilitate near real-time MR-only planning. This work describes a novel method for generating brain synCTs based on generative adversarial networks (GANs), a deep learning model that trains two competing networks simultaneously, and compares it against a deep convolutional neural network (CNN).
Post-gadolinium T1-weighted MRI and CT simulation (CT-SIM) images from fifteen brain cancer patients were retrospectively analyzed. The GAN model was developed to generate synCTs from T1-weighted MRI inputs, using a residual network (ResNet) as the generator. The discriminator was a CNN with five convolutional layers that classified its input image as real or synthetic. Fivefold cross-validation was performed to validate the model. GAN performance was compared to the CNN using mean absolute error (MAE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR) computed between the synCT and CT images.
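The three evaluation metrics named above can be sketched with plain NumPy. This is an illustrative sketch, not the authors' implementation: the SSIM shown is a simplified single-window (global) variant of the standard formula, whereas library implementations (e.g., scikit-image) use a sliding window, and the `data_range` parameter (the assumed HU dynamic range) is a choice left to the user.

```python
import numpy as np

def mae(ct, syn):
    """Mean absolute error, in the same units as the inputs (e.g., HU)."""
    return float(np.mean(np.abs(ct - syn)))

def psnr(ct, syn, data_range):
    """Peak signal-to-noise ratio in dB for a given dynamic range."""
    mse = np.mean((ct - syn) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return float(10.0 * np.log10(data_range ** 2 / mse))

def ssim_global(ct, syn, data_range):
    """Simplified single-window SSIM (library versions use a sliding window)."""
    c1 = (0.01 * data_range) ** 2  # standard stabilizing constants
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = ct.mean(), syn.mean()
    var_x, var_y = ct.var(), syn.var()
    cov = ((ct - mu_x) * (syn - mu_y)).mean()
    return float((2 * mu_x * mu_y + c1) * (2 * cov + c2)
                 / ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))

# Example: compare a hypothetical CT slice against a noisy "synCT" of it.
rng = np.random.default_rng(0)
ct = rng.uniform(-1000.0, 2000.0, size=(64, 64))   # HU-like values
syn = ct + rng.normal(0.0, 50.0, size=(64, 64))    # simulated synthesis error
print(mae(ct, syn), psnr(ct, syn, 3000.0), ssim_global(ct, syn, 3000.0))
```

A reasonable `data_range` for CT is the HU span actually present in the images (here assumed to be 3000 HU); the choice shifts PSNR and SSIM values, so it should be reported alongside the metrics.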
GAN training took ~11 h, with a testing time of 5.7 ± 0.6 s per new case. For the GAN, the MAE between synCT and CT-SIM was 89.3 ± 10.3 Hounsfield units (HU) across the entire FOV and 41.9 ± 8.6 HU within tissue; in bone and air, however, the MAE averaged ~240-255 HU. By comparison, the CNN model had an average full-FOV MAE of 102.4 ± 11.1 HU. For the GAN, the mean PSNR was 26.6 ± 1.2 dB and the mean SSIM was 0.83 ± 0.03. GAN synCTs preserved details better than CNN synCTs, and regions of abnormal anatomy were well represented on the GAN synCTs.
We developed and validated a GAN model that uses a single T1-weighted MR image as input to generate robust, high-quality synCTs in seconds. Our method offers strong potential for supporting near real-time MR-only treatment planning in the brain.