Pattern Recognition Lab, Department of Computer Science, Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen, Germany.
Department of Industrial Engineering and Health, Technical University of Applied Sciences Amberg-Weiden, Weiden, Germany.
Int J Comput Assist Radiol Surg. 2021 Dec;16(12):2069-2078. doi: 10.1007/s11548-021-02433-x. Epub 2021 Jun 20.
A magnetic resonance imaging (MRI) exam typically consists of several sequences that yield different image contrasts. Each sequence is parameterized through multiple acquisition parameters that influence image contrast, signal-to-noise ratio, acquisition time, and/or resolution. Depending on the clinical indication, different contrasts are required by the radiologist to make a diagnosis. As MR sequence acquisition is time consuming and acquired images may be corrupted due to motion, a method to synthesize MR images with adjustable contrast properties is required.
Therefore, we trained an image-to-image generative adversarial network conditioned on the MR acquisition parameters repetition time and echo time. Our approach is motivated by style transfer networks; in our case, however, the "style" of an image is given explicitly, as it is determined by the MR acquisition parameters on which our network is conditioned.
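A common way to condition an image-to-image network on scalar parameters such as repetition time (TR) and echo time (TE) is to broadcast the normalized values into constant-valued planes and concatenate them with the input image as extra channels. The sketch below illustrates this general scheme in NumPy; the normalization constants and the exact conditioning mechanism used in the paper are assumptions for illustration only.

```python
import numpy as np

def condition_on_parameters(image, tr, te, tr_max=15000.0, te_max=200.0):
    """Append normalized TR and TE as constant-valued channels to a
    single-channel MR image, so an image-to-image network can be
    conditioned on the acquisition parameters.

    image: (H, W) array; tr, te in ms; returns a (3, H, W) float32 array.
    """
    h, w = image.shape
    tr_plane = np.full((h, w), tr / tr_max, dtype=np.float32)  # constant TR channel
    te_plane = np.full((h, w), te / te_max, dtype=np.float32)  # constant TE channel
    return np.stack([image.astype(np.float32), tr_plane, te_plane])

x = condition_on_parameters(np.zeros((4, 4)), tr=3000.0, te=30.0)
print(x.shape)  # (3, 4, 4)
```

The concatenated tensor can then be fed to the generator in place of the plain image, letting the same network produce different target contrasts for different (TR, TE) pairs.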
This enables us to synthesize MR images with adjustable image contrast. We evaluated our approach on the fastMRI dataset, a large, publicly available set of MR knee images, and showed that our method outperforms a benchmark pix2pix approach in the translation of non-fat-saturated MR images to fat-saturated images. Our approach achieves a peak signal-to-noise ratio of 24.48 and a structural similarity of 0.66, significantly surpassing the pix2pix benchmark model.
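The reported peak signal-to-noise ratio (PSNR) follows the standard definition based on mean squared error. A minimal sketch is shown below; the `data_range` of 1.0 assumes intensities normalized to [0, 1], which is an illustrative assumption, not a detail stated in the abstract (structural similarity is usually computed with a library such as scikit-image rather than by hand).

```python
import numpy as np

def psnr(reference, synthesized, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    synthesized image, with intensities scaled to [0, data_range]."""
    diff = reference.astype(np.float64) - synthesized.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

ref = np.zeros((8, 8))
syn = np.full((8, 8), 0.1)  # constant error of 0.1 -> MSE = 0.01
print(round(psnr(ref, syn), 2))  # 20.0
```

Higher PSNR indicates lower pixel-wise error between the synthesized and the ground-truth contrast.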
Our model is the first that enables fine-tuned contrast synthesis, which can be used to synthesize missing MR contrasts or as a data augmentation technique for AI training in MRI. It can also serve as a basis for other image-to-image translation tasks within medical imaging, e.g., to enhance intermodality translation (MRI → CT) or 7 T image synthesis from 3 T MR images.