
Cross-Modality Image Translation of 3 Tesla Magnetic Resonance Imaging to 7 Tesla Using Generative Adversarial Networks.

Author Information

Diniz Eduardo, Santini Tales, Karim Helmet, Aizenstein Howard J, Ibrahim Tamer S

Affiliations

Department of Psychology, Carnegie Mellon University, Pennsylvania, USA.

Department of Bioengineering, University of Pittsburgh, Pennsylvania, USA.

Publication Information

Hum Brain Mapp. 2025 Jun 15;46(9):e70246. doi: 10.1002/hbm.70246.

Abstract

The rapid advancement of magnetic resonance imaging (MRI) technology has precipitated a new paradigm wherein cross-modality data translation across diverse imaging platforms, field strengths, and sites is increasingly challenging. This issue is particularly accentuated when transitioning from 3 Tesla (3T) to 7 Tesla (7T) MRI systems. This study proposes a novel solution to these challenges using generative adversarial networks (GANs)-specifically, the CycleGAN architecture-to create synthetic 7T images from 3T data. Employing a dataset of 1112 and 490 unpaired 3T and 7T MR images, respectively, we trained a 2-dimensional (2D) CycleGAN model and evaluated its performance on a paired dataset of 22 participants scanned at both 3T and 7T. Independent testing on 22 distinct participants affirmed the model's proficiency in accurately predicting various tissue types, encompassing cerebrospinal fluid, gray matter, and white matter. Our approach provides a reliable and efficient methodology for synthesizing 7T images, achieving median Dice coefficients of 83.62% for cerebrospinal fluid (CSF), 81.42% for gray matter (GM), and 89.75% for white matter (WM), while the corresponding median percentage area differences (PAD) were 6.82%, 7.63%, and 4.85% for CSF, GM, and WM, respectively, in the testing dataset, thereby aiding in harmonizing heterogeneous datasets. Furthermore, it delineates the potential of GANs to amplify the contrast-to-noise ratio (CNR) relative to 3T, potentially enhancing the diagnostic capability of the images. While acknowledging the risk of model overfitting, our research underscores a promising progression toward harnessing the benefits of 7T MR systems in research investigations while preserving compatibility with existing 3T MR data. This work was previously presented at the ISMRM 2021 conference.
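The Dice coefficient and percentage area difference (PAD) quoted above are standard segmentation-agreement metrics. As a minimal sketch of how such metrics are computed over binary tissue masks — assuming the common definitions (Dice as twice the overlap over the summed areas; PAD as the absolute area difference relative to the reference mask), which may differ in detail from the paper's exact implementation:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between two binary segmentation masks, in percent."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    total = pred.sum() + truth.sum()
    if total == 0:
        return 100.0  # both masks empty: perfect agreement by convention
    intersection = np.logical_and(pred, truth).sum()
    return 100.0 * 2.0 * intersection / total

def percent_area_difference(pred: np.ndarray, truth: np.ndarray) -> float:
    """Absolute difference in segmented area, as a percentage of the reference area."""
    pred_area = pred.astype(bool).sum()
    truth_area = truth.astype(bool).sum()
    return 100.0 * abs(int(pred_area) - int(truth_area)) / truth_area

# Toy example: two overlapping 4x4 masks (4 and 6 voxels)
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True
print(round(dice_coefficient(a, b), 2))         # 2*4/(4+6) -> 80.0
print(round(percent_area_difference(a, b), 2))  # |4-6|/6   -> 33.33
```

In practice, these would be computed per tissue class (CSF, GM, WM) on segmentations of the synthetic 7T image against segmentations of the acquired 7T image, then summarized as medians across the 22 test participants.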

Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/08b6/12182983/83da14dfe776/HBM-46-e70246-g002.jpg
