IEEE Trans Biomed Eng. 2018 Dec;65(12):2720-2730. doi: 10.1109/TBME.2018.2814538. Epub 2018 Mar 9.
Medical imaging plays a critical role in various clinical applications. However, due to considerations such as cost and radiation dose, the acquisition of certain image modalities may be limited. Medical image synthesis, which estimates a desired imaging modality without an actual scan, can therefore be of great benefit. In this paper, we propose a generative adversarial approach to address this challenging problem. Specifically, we train a fully convolutional network (FCN) to generate a target image given a source image. To better model the nonlinear mapping from source to target and to produce more realistic target images, we train the FCN with an adversarial learning strategy. Moreover, the FCN incorporates an image-gradient-difference loss term to avoid generating blurry target images. A long-term residual unit is also explored to ease the training of the network. We further apply the Auto-Context Model to implement a context-aware deep convolutional adversarial network. Experimental results show that our method is accurate and robust for synthesizing target images from the corresponding source images. In particular, we evaluate our method on three datasets, addressing the tasks of generating CT from MRI and generating 7T MRI from 3T MRI images. Our method outperforms the state-of-the-art methods under comparison on all datasets and tasks.
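As a rough illustration of the image-gradient-difference term mentioned in the abstract, the minimal PyTorch sketch below penalizes mismatch between the spatial gradients of the synthesized and the real target images. The finite-difference formulation, the combination with an L1 reconstruction term, and the loss weights are illustrative assumptions, not the paper's exact implementation or hyperparameters.

```python
import torch
import torch.nn.functional as F

def gradient_difference_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Penalize differences between the spatial gradients of the synthesized
    image (pred) and the real target image, discouraging blurry outputs.

    pred, target: tensors of shape (N, C, H, W).
    """
    # Absolute finite differences along height and width.
    pred_dy = torch.abs(pred[:, :, 1:, :] - pred[:, :, :-1, :])
    pred_dx = torch.abs(pred[:, :, :, 1:] - pred[:, :, :, :-1])
    target_dy = torch.abs(target[:, :, 1:, :] - target[:, :, :-1, :])
    target_dx = torch.abs(target[:, :, :, 1:] - target[:, :, :, :-1])
    # Mean squared difference of gradient magnitudes along each axis.
    return F.mse_loss(pred_dy, target_dy) + F.mse_loss(pred_dx, target_dx)


def generator_loss(pred, target, adversarial_term,
                   lambda_gdl=1.0, lambda_adv=0.5):
    """Illustrative combined generator objective: voxel-wise reconstruction
    plus gradient-difference plus adversarial terms (weights are assumed)."""
    recon = F.l1_loss(pred, target)
    gdl = gradient_difference_loss(pred, target)
    return recon + lambda_gdl * gdl + lambda_adv * adversarial_term
```

In a typical adversarial setup, `adversarial_term` would be the generator's loss against a discriminator that classifies synthesized versus real target images; the sketch only shows how the gradient-difference term slots into such an objective.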