Sun Bin, Jia Shuangfu, Jiang Xiling, Jia Fucang
Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China.
Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China.
Int J Comput Assist Radiol Surg. 2023 Jan;18(1):149-156. doi: 10.1007/s11548-022-02732-x. Epub 2022 Aug 19.
CycleGAN and its variants are widely used in medical image synthesis because they can be trained with unpaired data. The most common approach is to use a generative adversarial network (GAN) to process 2D slices and then concatenate the synthesized slices into a 3D medical image. However, these methods often introduce spatial inconsistencies between contiguous slices. We propose a new CycleGAN-based model that addresses this problem and achieves high-quality conversion from magnetic resonance (MR) to computed tomography (CT) images.
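For context, a minimal sketch (in PyTorch, an assumed framework; the abstract does not name one) of the slice-wise pipeline described above: each 2D slice is translated independently by a trained 2D generator and the results are stacked back into a volume, which is where inter-slice inconsistencies can arise. Here, generator_2d is a placeholder, not a function from the paper.

import torch

def synthesize_volume_slicewise(mr_volume: torch.Tensor, generator_2d) -> torch.Tensor:
    # mr_volume: (D, H, W) MR volume; returns a (D, H, W) synthetic CT volume.
    # Each slice is translated independently, which is what allows the
    # inter-slice inconsistencies mentioned above to appear.
    ct_slices = []
    with torch.no_grad():
        for z in range(mr_volume.shape[0]):
            mr_slice = mr_volume[z].unsqueeze(0).unsqueeze(0)   # (1, 1, H, W)
            ct_slice = generator_2d(mr_slice)                   # (1, 1, H, W)
            ct_slices.append(ct_slice[0, 0])
    return torch.stack(ct_slices, dim=0)                        # stack along the slice axis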
To achieve spatial consistency in the synthesized 3D images while avoiding memory-heavy 3D convolutions, we reorganize three adjacent slices into a 2.5D slice and use it as the input image. We further propose a U-Net discriminator network, which perceives the input both locally and globally, to improve accuracy. Finally, instead of upsampling with a single fixed kernel for all samples, the model uses Content-Aware ReAssembly of FEatures (CARAFE) upsampling, which has a large field of view and adapts to the content of each sample.
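A minimal sketch, assuming a PyTorch implementation, of two of the components described above: the 2.5D input construction (three adjacent slices stacked along the channel axis, with replication at the volume boundaries as an assumed edge-handling choice) and a compact U-Net-style discriminator that outputs both a global (encoder) decision and a per-pixel (decoder) decision. Channel widths, depth, and layer choices are illustrative assumptions, not the paper's exact architecture; CARAFE upsampling is omitted here.

import torch
import torch.nn as nn

def make_2p5d_slabs(volume: torch.Tensor) -> torch.Tensor:
    # volume: (D, H, W) -> (D, 3, H, W): one 3-channel 2.5D sample per slice,
    # replicating the first/last slice at the volume boundaries (assumption).
    depth = volume.shape[0]
    slabs = []
    for z in range(depth):
        lo, hi = max(z - 1, 0), min(z + 1, depth - 1)
        slabs.append(torch.stack([volume[lo], volume[z], volume[hi]], dim=0))
    return torch.stack(slabs, dim=0)

class UNetDiscriminator(nn.Module):
    # Encoder branch -> coarse global real/fake score; decoder branch with skip
    # connections -> per-pixel score map, so the network judges the input both
    # globally and locally.
    def __init__(self, in_ch: int = 3, base: int = 32):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, base, 4, 2, 1), nn.LeakyReLU(0.2, True))
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 4, 2, 1), nn.LeakyReLU(0.2, True))
        self.enc3 = nn.Sequential(nn.Conv2d(base * 2, base * 4, 4, 2, 1), nn.LeakyReLU(0.2, True))
        self.global_head = nn.Conv2d(base * 4, 1, 3, 1, 1)             # global decision
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(base * 4, base * 2, 4, 2, 1), nn.LeakyReLU(0.2, True))
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(base * 4, base, 4, 2, 1), nn.LeakyReLU(0.2, True))
        self.local_head = nn.ConvTranspose2d(base * 2, 1, 4, 2, 1)      # per-pixel decision

    def forward(self, x):
        e1 = self.enc1(x)                                    # (B, base,   H/2, W/2)
        e2 = self.enc2(e1)                                   # (B, 2*base, H/4, W/4)
        e3 = self.enc3(e2)                                   # (B, 4*base, H/8, W/8)
        d2 = self.dec2(e3)                                   # (B, 2*base, H/4, W/4)
        d1 = self.dec1(torch.cat([d2, e2], dim=1))           # skip connection from encoder
        local = self.local_head(torch.cat([d1, e1], dim=1))  # (B, 1, H, W)
        return self.global_head(e3), local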
For 3D images synthesized by the double U-Net CycleGAN, the mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM) are 74.56±10.02, 27.12±0.71, and 0.84±0.03, respectively. Our method achieves better results than state-of-the-art methods.
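For reference, a hedged sketch of how these three metrics are commonly computed between a ground-truth CT volume and the synthesized one; the intensity range (data_range) is an assumption and must match the actual CT value scaling used in the evaluation.

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_synthesis(ct_true: np.ndarray, ct_fake: np.ndarray, data_range: float = 2000.0):
    # ct_true, ct_fake: 3D arrays with the same shape and intensity scaling.
    mae = float(np.mean(np.abs(ct_true - ct_fake)))
    psnr = peak_signal_noise_ratio(ct_true, ct_fake, data_range=data_range)
    ssim = structural_similarity(ct_true, ct_fake, data_range=data_range)
    return mae, psnr, ssim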
The experimental results indicate that our method can convert MR to CT images using unpaired data and achieves better results than state-of-the-art methods. Compared with 3D CycleGAN, it synthesizes better 3D CT images with less computation and memory.