Dai Xianjin, Lei Yang, Fu Yabo, Curran Walter J, Liu Tian, Mao Hui, Yang Xiaofeng
Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA.
Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA, USA.
Med Phys. 2020 Dec;47(12):6343-6354. doi: 10.1002/mp.14539. Epub 2020 Oct 27.
Complementary information obtained from multiple tissue contrasts helps physicians assess, diagnose, and plan treatment for a variety of diseases. However, acquiring multi-contrast magnetic resonance images (MRI) for every patient using multiple pulse sequences is time-consuming and expensive, and medical image synthesis has been demonstrated as an effective alternative. The purpose of this study was to develop a unified framework for multimodal MR image synthesis.
A unified generative adversarial network consisting of only a single generator and a single discriminator was developed to learn the mappings among images of four different modalities. The generator took an image and its modality label as inputs and learned to synthesize the image in the target modality, while the discriminator was trained to distinguish between real and synthesized images and to classify them into their corresponding modalities. The network was trained and tested on multimodal brain MRI comprising four contrasts: T1-weighted (T1), contrast-enhanced T1-weighted (T1c), T2-weighted (T2), and fluid-attenuated inversion recovery (Flair). The proposed method was assessed quantitatively by computing the normalized mean absolute error (NMAE), peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), visual information fidelity (VIF), and naturalness image quality evaluator (NIQE).
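The modality-label conditioning described above is commonly implemented (as in StarGAN-style networks) by broadcasting a one-hot label over the spatial dimensions and concatenating it to the image channels before the generator's first layer. The abstract does not give implementation details, so the following NumPy sketch of this input-assembly step is an assumption, not the authors' code.

```python
import numpy as np

MODALITIES = ["T1", "T1c", "T2", "Flair"]  # the four contrasts in the study

def assemble_generator_input(image, target_modality):
    """Concatenate an image with a spatially broadcast one-hot modality label.

    image: (C, H, W) array; target_modality: one of MODALITIES.
    Returns a (C + 4, H, W) array that a conditional generator could consume.
    (Hypothetical helper; the paper does not specify its conditioning scheme.)
    """
    c, h, w = image.shape
    onehot = np.zeros(len(MODALITIES), dtype=image.dtype)
    onehot[MODALITIES.index(target_modality)] = 1.0
    # Broadcast each label entry to a constant H x W feature map.
    label_maps = np.broadcast_to(onehot[:, None, None], (len(MODALITIES), h, w))
    return np.concatenate([image, label_maps], axis=0)

# Example: a single-channel 8x8 "T1" slice conditioned to synthesize T2.
x = np.random.rand(1, 8, 8).astype(np.float32)
g_in = assemble_generator_input(x, "T2")
print(g_in.shape)  # (5, 8, 8)
```

With this scheme, a single generator serves all source-target modality pairs: changing the one-hot label changes the requested output contrast without retraining or swapping networks.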
The proposed model was trained and tested on a cohort of 274 glioma patients with well-aligned multimodal MRI scans. After training, tests were conducted using each of T1, T1c, T2, and Flair as the single input modality to generate the remaining three modalities. The proposed method shows high accuracy and robustness for image synthesis with any MRI modality available in the database as input. For example, with T1 as the input modality, the NMAEs for the generated T1c, T2, and Flair were 0.034 ± 0.005, 0.041 ± 0.006, and 0.041 ± 0.006, the PSNRs were 32.353 ± 2.525 dB, 30.016 ± 2.577 dB, and 29.091 ± 2.795 dB, the SSIMs were 0.974 ± 0.059, 0.969 ± 0.059, and 0.959 ± 0.059, the VIFs were 0.750 ± 0.087, 0.706 ± 0.097, and 0.654 ± 0.062, and the NIQEs were 1.396 ± 0.401, 1.511 ± 0.460, and 1.259 ± 0.358, respectively.
We proposed a novel multimodal MR image synthesis method based on a unified generative adversarial network. The network takes an image and its modality label as inputs and synthesizes multimodal images in a single forward pass. The results demonstrate that the proposed method is able to accurately synthesize multimodal MR images from a single MR image.