IEEE Trans Med Imaging. 2022 Dec;41(12):3895-3906. doi: 10.1109/TMI.2022.3199155. Epub 2022 Dec 2.
Learning-based translation between MRI contrasts involves supervised deep models trained on high-quality source- and target-contrast images derived from fully-sampled acquisitions, which can be difficult to collect given constraints on scan cost and time. To facilitate curation of training sets, here we introduce the first semi-supervised model for MRI contrast translation (ssGAN) that can be trained directly on undersampled k-space data. To enable semi-supervised learning on undersampled data, ssGAN introduces novel multi-coil losses in the image, k-space, and adversarial domains. Unlike the conventional losses in single-coil synthesis models, the multi-coil losses are selectively enforced on acquired k-space samples only. Comprehensive experiments on retrospectively undersampled multi-contrast brain MRI datasets are provided. Our results demonstrate that ssGAN performs on par with a fully-supervised model, while outperforming single-coil models trained on coil-combined magnitude images. It also outperforms cascaded reconstruction-synthesis models in which a supervised synthesis model is trained following self-supervised reconstruction of the undersampled data. Thus, ssGAN holds great promise to improve the feasibility of learning-based multi-contrast MRI synthesis.
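To make the abstract's core idea concrete, the sketch below illustrates a selective multi-coil k-space loss in PyTorch-like pseudocode: the synthesized target-contrast image is projected onto individual coils, transformed to k-space, and penalized only at acquired k-space locations. This is a minimal illustrative sketch, not the authors' implementation; the function name, tensor shapes, sensitivity-map handling, and the choice of an L2 penalty are assumptions.

```python
import torch

def selective_multicoil_kspace_loss(pred_image, target_kspace, coil_sens, mask):
    """Hypothetical sketch of a loss enforced only on acquired k-space samples.

    pred_image:    (B, H, W) complex, synthesized target-contrast image
    target_kspace: (B, C, H, W) complex, undersampled multi-coil k-space data
    coil_sens:     (B, C, H, W) complex, coil sensitivity maps
    mask:          (B, 1, H, W) binary sampling mask (1 = acquired sample)
    """
    # Project the synthesized image onto individual coils.
    coil_images = coil_sens * pred_image.unsqueeze(1)
    # Transform coil images to k-space (centered 2D FFT).
    pred_kspace = torch.fft.fftshift(
        torch.fft.fft2(torch.fft.ifftshift(coil_images, dim=(-2, -1))),
        dim=(-2, -1),
    )
    # Enforce the loss only where k-space samples were actually acquired.
    diff = (pred_kspace - target_kspace) * mask
    return diff.abs().pow(2).sum() / mask.sum().clamp(min=1)
```

Restricting the penalty to the sampling mask is what allows training without fully-sampled ground truths, since unacquired k-space locations never contribute to the loss.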