Remedios Samuel W, Dewey Blake E, Carass Aaron, Pham Dzung L, Prince Jerry L
Department of Computer Science, Johns Hopkins University, Baltimore, MD 21286, USA.
Department of Radiology and Imaging Sciences, National Institutes of Health, Bethesda, MD 20892, USA.
Proc SPIE Int Soc Opt Eng. 2023 Feb;12464. doi: 10.1117/12.2654032. Epub 2023 Apr 3.
Generative priors for magnetic resonance (MR) images have been used in a number of medical image analysis applications. Given the large number of deep learning methods that operate on 2D medical images, it would be beneficial to have a generator trained on complete, high-resolution 2D head MR slices from multiple orientations and multiple contrasts. In this work, we trained a StyleGAN3-T model on head MR slices with T1- and T2-weighted contrasts using public data. We restricted the training corpus of this model to slices from 1mm isotropic volumes corresponding to three standard radiological views, with a fixed set of pre-processing steps. To retain full applicability to downstream tasks, we did not skull-strip the images. Several analyses of the trained network, including examination of qualitative samples, interpolation of latent codes, and style mixing, demonstrate the expressivity of the network. Images from this network can be used for a variety of downstream tasks. The weights are open-sourced and are available at https://gitlab.com/iacl/high-res-mri-head-slice-gan.
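The latent-code interpolation analysis mentioned above can be sketched in a few lines. A minimal, self-contained illustration is below, assuming the usual StyleGAN-family setup of a 512-dimensional Gaussian latent space; the function names are illustrative, and producing actual images would additionally require loading the released generator weights and passing each interpolated code through the generator.

```python
import numpy as np

def lerp(z0, z1, t):
    """Linear interpolation between latent codes z0 and z1 at t in [0, 1]."""
    return (1.0 - t) * z0 + t * z1

def slerp(z0, z1, t, eps=1e-8):
    """Spherical interpolation, often preferred for Gaussian latent spaces,
    since it keeps intermediate codes near the typical norm of samples."""
    z0n = z0 / (np.linalg.norm(z0) + eps)
    z1n = z1 / (np.linalg.norm(z1) + eps)
    omega = np.arccos(np.clip(np.dot(z0n, z1n), -1.0, 1.0))
    if omega < eps:  # nearly parallel codes: fall back to linear
        return lerp(z0, z1, t)
    return (np.sin((1.0 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

# Sample two latent codes (StyleGAN3 conventionally uses a 512-dim z space).
rng = np.random.default_rng(0)
z_a = rng.standard_normal(512)
z_b = rng.standard_normal(512)

# An 8-step path between them; each code would be fed to the generator's
# mapping network to produce one frame of the interpolation.
path = [slerp(z_a, z_b, t) for t in np.linspace(0.0, 1.0, 8)]
```

Smooth image transitions along such a path (rather than abrupt jumps) are one qualitative indicator that the generator has learned a well-structured latent space.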