IEEE Trans Med Imaging. 2019 Jul;38(7):1750-1762. doi: 10.1109/TMI.2019.2895894. Epub 2019 Jan 29.
Magnetic resonance (MR) imaging is a widely used medical imaging protocol that can be configured to provide different contrasts between the tissues in the human body. By setting different scanning parameters, each MR imaging modality reflects the unique visual characteristics of the scanned body part, benefiting subsequent analysis from multiple perspectives. To exploit the complementary information from multiple imaging modalities, cross-modality MR image synthesis has recently attracted increasing research interest. However, most existing methods focus only on minimizing pixel/voxel-wise intensity differences while ignoring the textural details of image content structure, which degrades the quality of the synthesized images. In this paper, we propose edge-aware generative adversarial networks (Ea-GANs) for cross-modality MR image synthesis. Specifically, we integrate edge information, which reflects the textural structure of image content and delineates the boundaries of different objects in images, to reduce this gap. Corresponding to different learning strategies, two frameworks are proposed: a generator-induced Ea-GAN (gEa-GAN) and a discriminator-induced Ea-GAN (dEa-GAN). The gEa-GAN incorporates edge information via its generator, while the dEa-GAN does so in both the generator and the discriminator, so that edge similarity is also learned adversarially. In addition, the proposed Ea-GANs are 3D-based and utilize hierarchical features to capture contextual information. Experimental results demonstrate that the proposed Ea-GANs, especially the dEa-GAN, outperform multiple state-of-the-art methods for cross-modality MR image synthesis in both qualitative and quantitative measures. Moreover, the dEa-GAN also generalizes well to generic image synthesis tasks on benchmark datasets of facades, maps, and cityscapes.
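The abstract's central idea, augmenting a voxel-wise intensity loss with an edge-map similarity term, can be illustrated with a minimal 2D sketch. The abstract does not specify the edge extractor, so a Sobel-style gradient magnitude is assumed here; the function names and the weighting parameter `lam` are illustrative, not the paper's actual implementation (which is 3D and adversarial).

```python
import numpy as np

def sobel_edge_map(img):
    """Approximate gradient-magnitude edge map via 3x3 Sobel filters.

    img: 2D float array; returns an array of the same shape.
    (Assumed edge extractor -- the paper's 3D method may differ.)
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    padded = np.pad(img, 1, mode="edge")  # replicate borders
    gx = np.zeros(img.shape, dtype=float)
    gy = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.sqrt(gx ** 2 + gy ** 2)

def edge_aware_l1(real, fake, lam=1.0):
    """Intensity L1 plus an edge-map L1 term, in the spirit of the
    gEa-GAN generator objective (lam is a hypothetical trade-off weight)."""
    intensity = np.mean(np.abs(real - fake))
    edge = np.mean(np.abs(sobel_edge_map(real) - sobel_edge_map(fake)))
    return intensity + lam * edge
```

For example, a synthesized image that matches the target in mean intensity but blurs a tissue boundary incurs a higher loss here than under plain L1, because the edge-map term penalizes the missing gradient structure; the dEa-GAN goes further by letting the discriminator judge edge maps adversarially rather than through a fixed L1 penalty.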