IEEE Trans Med Imaging. 2020 Jul;39(7):2339-2350. doi: 10.1109/TMI.2020.2969630. Epub 2020 Jan 27.
Generative adversarial networks (GANs) have been widely explored for cross-modality medical image synthesis. Existing GAN models usually adversarially learn a global sample space mapping from the source modality to the target modality and then indiscriminately apply this mapping to all samples for prediction. However, because training samples are scarce relative to the complexity of medical image synthesis, learning a single global sample space mapping that is "optimal" for all samples is very challenging, if not intractable. To address this issue, this paper proposes sample-adaptive GAN models, which not only capture the global sample space mapping between the source and target modalities but also explore the local space around each given sample to extract its unique characteristics. Specifically, the proposed sample-adaptive GANs decompose the entire learning model into two cooperative paths. The baseline path learns a common GAN model as usual by fitting all the training samples, yielding the global sample space mapping. The new sample-adaptive path additionally models each sample by learning its relationship with its neighboring training samples and using the target-modality features of those training samples as auxiliary information for synthesis. Enhanced by this sample-adaptive path, the proposed sample-adaptive GANs are able to flexibly adjust themselves to different samples and thereby optimize synthesis performance. Our models have been verified on three cross-modality MR image synthesis tasks from two public datasets, where they significantly outperform state-of-the-art methods. Moreover, the experiments also indicate that our sample-adaptive strategy can be used to improve various backbone GAN models. It complements existing GAN models and can be readily integrated when needed.
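The two-path idea can be illustrated with a minimal sketch, assuming a simplified feature-space setting: a global mapping stands in for the baseline GAN path, and the sample-adaptive path retrieves the k nearest training samples in source space and aggregates their target-modality features as auxiliary information. All names (`global_mapping`, `sample_adaptive_synthesis`, the linear stand-in `W`, and the blending weight `alpha`) are hypothetical and not from the paper, which uses adversarially trained networks rather than this closed-form blend.

```python
import numpy as np

# Hypothetical illustration of the two cooperative paths (NOT the authors'
# exact model): a global mapping learned from all samples, plus a
# sample-adaptive term built from nearest-neighbour target features.

rng = np.random.default_rng(0)
D = 8                                         # illustrative feature dimension
X_train = rng.normal(size=(100, D))           # source-modality training features
W = 0.1 * rng.normal(size=(D, D))             # linear stand-in for the baseline path
Y_train = X_train @ W + 0.01 * rng.normal(size=(100, D))  # paired target features

def global_mapping(x):
    """Baseline path: one mapping fit on all training samples."""
    return x @ W

def sample_adaptive_synthesis(x, k=5, alpha=0.7):
    """Blend the global prediction with the target-modality features of the
    k nearest training neighbours of x (the sample-adaptive path)."""
    d = np.linalg.norm(X_train - x, axis=1)   # distances in source space
    idx = np.argsort(d)[:k]                   # k nearest training samples
    w = np.exp(-d[idx])                       # closer neighbours weigh more
    w /= w.sum()
    neighbour_target = (w[:, None] * Y_train[idx]).sum(axis=0)
    return alpha * global_mapping(x) + (1 - alpha) * neighbour_target

x_new = rng.normal(size=D)
y_hat = sample_adaptive_synthesis(x_new)
```

Setting `alpha=1.0` recovers the pure global mapping, which mirrors how the sample-adaptive path is an additive refinement on top of the baseline path rather than a replacement for it.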