Electrical Engineering and Computer Sciences, University of California at Berkeley, Berkeley, California, USA.
International Computer Science Institute, University of California at Berkeley, Berkeley, California, USA.
Magn Reson Med. 2023 Nov;90(5):2116-2129. doi: 10.1002/mrm.29766. Epub 2023 Jun 18.
To propose a supervised learning-based method that directly synthesizes contrast-weighted images from Magnetic Resonance Fingerprinting (MRF) data without performing quantitative mapping or spin-dynamics simulations.
To implement our direct contrast synthesis (DCS) method, we deploy a conditional generative adversarial network (GAN) framework with a multi-branch U-Net as the generator and a multilayer CNN (PatchGAN) as the discriminator. We refer to our proposed approach as N-DCSNet. The input MRF data are used to directly synthesize T1-weighted, T2-weighted, and fluid-attenuated inversion recovery (FLAIR) images through supervised training on paired MRF and target spin echo-based contrast-weighted scans. The performance of our proposed method is demonstrated on in vivo MRF scans from healthy volunteers. Quantitative metrics, including normalized root mean square error (nRMSE), peak signal-to-noise ratio (PSNR), structural similarity (SSIM), learned perceptual image patch similarity (LPIPS), and Fréchet inception distance (FID), were used to evaluate the performance of the proposed method and compare it with others.
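Two of the quantitative metrics named above, nRMSE and PSNR, can be computed directly from image arrays. The sketch below is a minimal NumPy illustration, assuming nRMSE is defined as the L2 error normalized by the L2 norm of the reference image and PSNR uses the reference's peak intensity; the function names are for illustration only.

```python
import numpy as np

def nrmse(pred, ref):
    """Normalized root mean square error: L2 error over the L2 norm of the reference."""
    return np.linalg.norm(pred - ref) / np.linalg.norm(ref)

def psnr(pred, ref):
    """Peak signal-to-noise ratio in dB, taking the reference's maximum as the peak."""
    mse = np.mean((pred - ref) ** 2)
    return 20.0 * np.log10(ref.max() / np.sqrt(mse))
```

SSIM, LPIPS, and FID require windowed statistics or pretrained feature networks and are typically taken from dedicated libraries rather than implemented by hand.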
In vivo experiments demonstrated superior image quality compared with simulation-based contrast synthesis and previous DCS methods, both visually and by quantitative metrics. We also present cases in which our trained model mitigates the in-flow and spiral off-resonance artifacts typically seen in MRF reconstructions, and thus more faithfully represents conventional spin echo-based contrast-weighted images.
We present N-DCSNet, which directly synthesizes high-fidelity multicontrast MR images from a single MRF acquisition and can thereby significantly decrease examination time. Because the network is trained to generate contrast-weighted images directly, our method requires no model-based simulation and therefore avoids reconstruction errors due to dictionary matching and contrast simulation (code available at: https://github.com/mikgroup/DCSNet).