Synthesizing CT images from MR images with deep learning: model generalization for different datasets through transfer learning.

Affiliations

Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America.

Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, People's Republic of China.

Publication Information

Biomed Phys Eng Express. 2021 Feb 24;7(2). doi: 10.1088/2057-1976/abe3a7.

Abstract

Replacing CT imaging with MR imaging for MR-only radiotherapy has sparked the interest of many scientists and is being increasingly adopted in radiation oncology. Although many studies have focused on generating CT images from MR images, the resulting models have typically been tested only on data from the same dataset used for training. How well a trained model performs on data from different hospitals and different MR protocols therefore remains unknown. In this study, we addressed the model generalization problem for the MR-to-CT conversion task.

Brain T2 MR images and corresponding CT images were collected from SZSPH (the source domain dataset), and brain T1-FLAIR and T1-POST MR images with corresponding CT images were collected from The University of Texas Southwestern (UTSW) (the target domain dataset). To investigate generalization ability, four potential solutions were compared: a source model, a target model, a combined model, and an adapted model. All models were trained using the CycleGAN network. The source model was trained from scratch with the source domain dataset and tested with the target domain dataset. The target model was trained and tested with the target domain dataset. The combined model was trained with both the source and target domain datasets and tested with the target domain dataset. The adapted model used a transfer learning strategy: a CycleGAN model was first trained with the source domain dataset, and the pre-trained model was then retrained (fine-tuned) with the target domain dataset. MAE, RMSE, PSNR, and SSIM were used to quantitatively evaluate model performance on the target domain dataset.

The adapted model achieved the best quantitative results: MAE, RMSE, PSNR, and SSIM of 74.56 ± 8.61, 193.18 ± 17.98, 28.30 ± 0.83, and 0.84 ± 0.01 on the T1-FLAIR dataset, and 74.89 ± 15.64, 195.73 ± 31.29, 27.72 ± 1.43, and 0.83 ± 0.04 on the T1-POST dataset. The source model had the poorest performance.

This work indicates that a pre-trained CycleGAN can generalize well, generating synthetic CT images from small MR training datasets. The quantitative results on test data spanning different scanning protocols and acquisition centers provide proof of this concept.
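The adapted model's pre-train-then-fine-tune strategy can be sketched in a few lines of PyTorch. The code below is a minimal illustration only, assuming a stand-in generator, a hypothetical checkpoint path, and a placeholder training loop; the paper's actual CycleGAN (paired generators and discriminators with adversarial and cycle-consistency losses) and its hyperparameters are not specified in the abstract and are omitted here.

```python
# Minimal sketch of the "adapted model" strategy: pre-train a generator on the
# source domain, then fine-tune it on the target domain.
# SimpleGenerator, the checkpoint path, the loaders, and the learning rates are
# illustrative assumptions, not the paper's implementation.
import torch
import torch.nn as nn

class SimpleGenerator(nn.Module):
    """Stand-in for a CycleGAN MR->CT generator (the real network is larger)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 3, padding=1),
        )
    def forward(self, mr):
        return self.net(mr)

def train_one_epoch(gen, loader, optimizer, loss_fn):
    """Placeholder update step; a real CycleGAN also trains discriminators and
    adds adversarial and cycle-consistency terms to the objective."""
    gen.train()
    for mr, ct in loader:
        optimizer.zero_grad()
        loss = loss_fn(gen(mr), ct)
        loss.backward()
        optimizer.step()

# 1) Source model: train from scratch on the source domain (SZSPH T2).
gen = SimpleGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=2e-4)
# for epoch in range(num_source_epochs): train_one_epoch(gen, source_loader, opt, nn.L1Loss())
torch.save(gen.state_dict(), "source_pretrained.pt")  # checkpoint after source training

# 2) Adapted model: load the pre-trained weights, then fine-tune on the
#    target domain (UTSW T1-FLAIR / T1-POST), typically with a lower learning rate.
gen.load_state_dict(torch.load("source_pretrained.pt"))
opt = torch.optim.Adam(gen.parameters(), lr=2e-5)
# for epoch in range(num_target_epochs): train_one_epoch(gen, target_loader, opt, nn.L1Loss())
```

The key design choice is that the target-domain stage starts from the source-domain weights rather than from random initialization, which is what allows a small target dataset to be sufficient.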
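The four reported metrics are standard image-similarity measures. Below is a minimal sketch, assuming registered CT and synthetic CT slices stored as NumPy arrays in Hounsfield units, of how MAE, RMSE, PSNR, and SSIM could be computed with NumPy and scikit-image; the data_range used for PSNR and SSIM is an assumed value, since the paper's exact normalization is not given in the abstract.

```python
# Sketch of the evaluation metrics (MAE, RMSE, PSNR, SSIM) between a synthetic
# CT and the reference CT. The data_range value is an assumption.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_sct(sct: np.ndarray, ct: np.ndarray, data_range: float = 2000.0):
    diff = sct.astype(np.float64) - ct.astype(np.float64)
    mae = np.mean(np.abs(diff))                    # mean absolute error in HU
    rmse = np.sqrt(np.mean(diff ** 2))             # root-mean-square error in HU
    psnr = peak_signal_noise_ratio(ct, sct, data_range=data_range)
    ssim = structural_similarity(ct, sct, data_range=data_range)
    return {"MAE": mae, "RMSE": rmse, "PSNR": psnr, "SSIM": ssim}

# Example with random data standing in for a registered CT / synthetic CT pair.
rng = np.random.default_rng(0)
ct = rng.uniform(-1000, 1000, size=(256, 256))
sct = ct + rng.normal(0, 50, size=ct.shape)
print(evaluate_sct(sct, ct))
```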

