Liu Mengting, Maiti Piyush, Thomopoulos Sophia, Zhu Alyssa, Chai Yaqiong, Kim Hosung, Jahanshad Neda
USC Mark and Mary Stevens Neuroimaging and Informatics Institute, Keck School of Medicine of USC, University of Southern California, Los Angeles, CA, USA.
Med Image Comput Comput Assist Interv. 2021 Sep-Oct;12903:313-322. doi: 10.1007/978-3-030-87199-4_30. Epub 2021 Sep 21.
Large data initiatives and high-powered brain imaging analyses require the pooling of MR images acquired across multiple scanners, often using different protocols. Prospective cross-site harmonization often involves the use of a phantom or traveling subjects. However, as more datasets become publicly available, there is a growing need for retrospective harmonization, pooling data from sites not originally coordinated together. Several retrospective harmonization techniques have shown promise in removing cross-site image variation. However, most unsupervised methods cannot distinguish between image-acquisition-based variability and cross-site population variability, so they require that datasets contain subjects or patient groups with similar clinical or demographic information. To overcome this limitation, we consider cross-site MRI image harmonization as a style transfer problem rather than a domain transfer problem. Using a fully unsupervised deep-learning framework based on a generative adversarial network (GAN), we show that MR images can be harmonized by directly inserting the style information encoded from a reference image, without knowing their site/scanner labels. We trained our model using data from five large-scale multi-site datasets with varied demographics. Results demonstrated that our style-encoding model successfully harmonizes MR images and matches intensity profiles without relying on traveling subjects. This model also avoids the need to control for clinical, diagnostic, or demographic information. Moreover, we further demonstrated that when the training set included sufficiently diverse images, our method successfully harmonized MR images collected from unseen scanners and protocols, suggesting a promising novel tool for ongoing collaborative studies.
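The core idea of inserting encoded style information from a reference image is closely related to adaptive instance normalization (AdaIN), in which "style" is injected by renormalizing content feature maps to the channel-wise statistics of a style image. The following NumPy sketch is purely illustrative (the paper's actual model is a GAN with a learned style encoder; the function name, array shapes, and statistics-matching formulation here are assumptions, not the authors' implementation):

```python
import numpy as np

def adain(content: np.ndarray, style: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Adaptive instance normalization over (C, H, W) feature maps:
    normalize each content channel to zero mean / unit variance, then
    rescale it to the corresponding style channel's mean and std."""
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    return (content - c_mean) / (c_std + eps) * s_std + s_mean

# Toy check: after AdaIN, each output channel carries the style
# image's intensity statistics while retaining the content's structure.
rng = np.random.default_rng(0)
content = rng.normal(0.0, 1.0, size=(3, 8, 8))   # stand-in "source site" features
style = rng.normal(5.0, 2.0, size=(3, 8, 8))     # stand-in "reference site" features
out = adain(content, style)
print(np.allclose(out.mean(axis=(1, 2)), style.mean(axis=(1, 2)), atol=1e-3))
```

In a full harmonization network, such a renormalization would typically be applied to intermediate decoder features, with the scale and shift predicted by a style encoder rather than computed from raw statistics as above.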