Li Zhaotong, Huang Xinrui, Zhang Zeru, Liu Liangyou, Wang Fei, Li Sha, Gao Song, Xia Jun
Institute of Medical Technology, Peking University Health Science Center, Beijing, China.
Institute of Medical Humanities, Peking University, Beijing, China.
Quant Imaging Med Surg. 2022 Jun;12(6):3151-3169. doi: 10.21037/qims-21-846.
Magnetic resonance imaging (MRI) images synthesized from computed tomography (CT) data can provide more detailed information on pathological structures than CT data alone; thus, MRI synthesis has received increasing attention, especially in medical scenarios where only CT images are available. A novel convolutional neural network (CNN) combined with a contextual loss function was proposed for the synthesis of T1- and T2-weighted images (T1WI and T2WI) from CT data.
A total of 5,053 T1WI and 5,081 T2WI slices were selected for the dataset of paired CT and MRI images. Affine registration, image denoising, and contrast enhancement were performed on this multi-modality medical image dataset comprising T1WI, T2WI, and CT images of the brain. A deep CNN, called double ResNet-U-Net (DRUNet), was then proposed by modifying the ResNet structure to constitute the encoder and decoder of a U-Net. Three different loss functions were used to optimize the parameters of the proposed models: mean squared error (MSE) loss, binary cross-entropy (BCE) loss, and contextual loss. Statistical analysis using the independent-samples t-test was conducted to compare DRUNets with different loss functions and different numbers of network layers.
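To make the architecture concrete, the following is a minimal sketch of the DRUNet idea: ResNet-style residual blocks serving as both the encoder and decoder stages of a U-Net. The channel widths, depth, and the exact configuration corresponding to "DRUNet-101" are assumptions; the abstract does not specify the full architecture.

```python
# Minimal sketch of a residual U-Net in the spirit of DRUNet.
# Widths, depth, and output activation are illustrative assumptions.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))   # identity skip, as in ResNet

class DRUNetSketch(nn.Module):
    """CT slice in (1 channel) -> synthetic MR slice out (1 channel)."""
    def __init__(self, base=64):
        super().__init__()
        self.stem = nn.Conv2d(1, base, 3, padding=1)
        self.enc1 = ResBlock(base)
        self.down = nn.Conv2d(base, base * 2, 3, stride=2, padding=1)
        self.enc2 = ResBlock(base * 2)
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = ResBlock(base)           # decoder also built from ResBlocks
        self.head = nn.Conv2d(base, 1, 1)

    def forward(self, ct):
        s1 = self.enc1(self.stem(ct))
        s2 = self.enc2(self.down(s1))
        d1 = self.dec1(self.up(s2) + s1)     # U-Net skip connection
        return torch.sigmoid(self.head(d1))  # intensities in [0, 1]
```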
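The contextual loss is the distinctive element of the training objective. A minimal sketch of the standard formulation (Mechrez et al., 2018) is given below; in practice it is typically computed on feature maps from a pretrained network at reduced resolution. The bandwidth h and feature source are assumptions, as the abstract does not state the paper's exact configuration.

```python
# Sketch of the contextual (CX) loss between feature maps of the
# synthetic and real MR images. Bandwidth h is an assumed hyperparameter.
import torch
import torch.nn.functional as F

def contextual_loss(feat_x, feat_y, h=0.5, eps=1e-5):
    """feat_x, feat_y: (B, C, H, W) features of synthetic and target images.
    Note: memory scales as O(N^2) in the number of spatial positions N,
    so features are usually taken from a downsampled layer."""
    b, c, _, _ = feat_x.shape
    x = feat_x.reshape(b, c, -1)                      # (B, C, N)
    y = feat_y.reshape(b, c, -1)

    # Center features on the target's channel-wise mean, then compare
    # all pairs of spatial positions by cosine distance.
    mu = y.mean(dim=2, keepdim=True)
    x = F.normalize(x - mu, p=2, dim=1)
    y = F.normalize(y - mu, p=2, dim=1)
    dist = 1.0 - torch.bmm(x.transpose(1, 2), y)      # (B, N, N)

    # Row-normalize distances, convert to similarities, and normalize so
    # each source position distributes its mass over target positions.
    dist_norm = dist / (dist.min(dim=2, keepdim=True).values + eps)
    w = torch.exp((1.0 - dist_norm) / h)
    cx = w / w.sum(dim=2, keepdim=True)

    # Contextual similarity: best match per target position, averaged,
    # then turned into a loss via negative log.
    cx_max = cx.max(dim=1).values.mean(dim=1)
    return (-torch.log(cx_max + eps)).mean()
```

Because the loss matches feature statistics rather than pixel positions, it penalizes missing texture and high-frequency detail more directly than MSE or BCE, which is consistent with the paper's motivation for adopting it.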
DRUNet-101 with contextual loss yielded higher values of peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and the Tenengrad function (34.25±2.06, 0.97±0.03, and 17.03±2.75 for T1WI and 33.50±1.08, 0.98±0.05, and 19.76±3.54 for T2WI, respectively). The differences were statistically significant (P<0.001) with narrow confidence intervals, indicating the superiority of DRUNet-101 with contextual loss. In addition, both the zoomed-in views and the difference maps of the final synthetic MR images visually reflected the robustness of DRUNet-101 with contextual loss. Visualization of the convolution filters and feature maps showed that the proposed model can generate synthetic MR images with high-frequency information.
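For reference, the three reported metrics could be computed per slice pair as sketched below. PSNR and SSIM follow standard definitions (here via scikit-image); the exact Tenengrad variant (thresholding, normalization) used in the paper is an assumption.

```python
# Sketch of per-slice evaluation with PSNR, SSIM, and a Tenengrad score.
import numpy as np
from scipy.ndimage import sobel
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_slice(synthetic, reference, data_range=1.0):
    """synthetic, reference: 2D float arrays scaled to [0, data_range]."""
    psnr = peak_signal_noise_ratio(reference, synthetic, data_range=data_range)
    ssim = structural_similarity(reference, synthetic, data_range=data_range)

    # Tenengrad sharpness: mean squared Sobel gradient magnitude of the
    # synthetic image; higher values indicate more high-frequency detail.
    gx = sobel(synthetic, axis=0)
    gy = sobel(synthetic, axis=1)
    tenengrad = float(np.mean(gx**2 + gy**2))
    return psnr, ssim, tenengrad
```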
The results demonstrated that DRUNet-101 with the contextual loss function preserved more high-frequency information in the synthetic MR images than the other two loss functions. The proposed DRUNet model also has a distinct advantage over previous models in terms of PSNR, SSIM, and Tenengrad score. Overall, DRUNet-101 with contextual loss is recommended for synthesizing MR images from CT scans.