MedICAL Laboratory, Polytechnique Montreal, Montreal, QC, Canada.
CHUM Research Center, Montreal, QC, Canada.
Phys Med Biol. 2021 Apr 23;66(9). doi: 10.1088/1361-6560/abf1bb.
With the emergence of online MRI-guided radiotherapy treatments, MR-based workflows have grown in importance in the clinical workflow. However, proper dose planning still requires CT images to calculate dose attenuation due to bony structures. In this paper, we present a novel deep image synthesis model that generates CT images from diagnostic MRI in an unsupervised manner for radiotherapy planning. The proposed model, based on a generative adversarial network (GAN), learns a new invariant representation to generate synthetic CT (sCT) images from high-frequency and appearance patterns. This new representation encodes each convolutional feature map of the convolutional GAN discriminator, making the training of the proposed model particularly robust in terms of image synthesis quality. Our model incorporates an analysis of common histogram features into the training process, reinforcing the generator so that the output sCT image exhibits a histogram matching that of the ground-truth CT. This CT-matched histogram is then embedded in a multi-resolution framework by assessing it over all layers of the discriminator network, which allows the model to robustly classify the output synthetic image. Experiments were conducted on head and neck images of 56 cancer patients with a wide range of shapes, sizes, and spatial image resolutions. The obtained results confirm the efficiency of the proposed model compared with other generative models: the mean absolute error yielded by our model was 26.44 (0.62), with a Hounsfield unit error of 45.3 (1.87) and an overall Dice coefficient of 0.74 (0.05), demonstrating the potential of the synthesis model for radiotherapy planning applications.
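As a rough illustration of the histogram-matching idea described in the abstract (a minimal sketch, not the authors' implementation), the snippet below compares the intensity histograms of a synthetic CT and a reference CT via an L1 distance between normalized histograms; the function name, bin count, and HU range are assumptions for illustration:

```python
import numpy as np

def histogram_match_loss(sct, ct, bins=64, value_range=(-1000.0, 2000.0)):
    """L1 distance between normalized intensity histograms (in HU).

    A lower value means the synthetic CT's intensity distribution is
    closer to that of the reference CT.
    """
    h_sct, _ = np.histogram(sct, bins=bins, range=value_range)
    h_ct, _ = np.histogram(ct, bins=bins, range=value_range)
    p_sct = h_sct / max(h_sct.sum(), 1)  # normalize to a probability vector
    p_ct = h_ct / max(h_ct.sum(), 1)
    return float(np.abs(p_sct - p_ct).sum())

# Mock data: a CT slice in HU, one well-matched sCT, one intensity-shifted sCT.
rng = np.random.default_rng(0)
ct = rng.normal(0.0, 300.0, size=(64, 64))
sct_good = ct + rng.normal(0.0, 10.0, size=ct.shape)   # small perturbation
sct_bad = ct + 500.0                                    # systematic HU shift

loss_good = histogram_match_loss(sct_good, ct)
loss_bad = histogram_match_loss(sct_bad, ct)
print(loss_good < loss_bad)  # closer histograms yield a smaller loss
```

In a GAN training loop, a differentiable variant of such a term (e.g. soft histograms) would be added to the generator loss; this non-differentiable NumPy version only illustrates the comparison itself.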