
Integrating cross-modality hallucinated MRI with CT to aid mediastinal lung tumor segmentation.

Author Information

Jue Jiang, Jason Hu, Neelam Tyagi, Andreas Rimner, Sean L Berry, Joseph O Deasy, Harini Veeraraghavan

Affiliations

Department of Medical Physics, Memorial Sloan Kettering Cancer Center.

Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center.

Publication Information

Med Image Comput Comput Assist Interv. 2019 Oct;11769:221-229. doi: 10.1007/978-3-030-32226-7_25. Epub 2019 Oct 10.

Abstract

Lung tumors, especially those located close to or surrounded by soft tissues such as the mediastinum, are difficult to segment due to the low soft-tissue contrast of computed tomography (CT) images. Magnetic resonance (MR) images contain superior soft-tissue contrast information that can be leveraged when both modalities are available for training. Therefore, we developed a cross-modality educed learning approach in which MR information educed from CT is used to hallucinate MRI and improve CT segmentation. Our approach, called cross-modality educed deep learning segmentation (CMEDL), combines CT and pseudo-MR images produced from CT by aligning their features to obtain segmentations on CT. Features computed in the last two layers of CT and MR segmentation networks trained in parallel are aligned. We implemented this approach on U-net and dense fully convolutional networks (dense-FCN). Our networks were trained on unrelated cohorts: open-source CT images from The Cancer Imaging Archive (N=377) and T2-weighted MR images from an internal archive (N=81), and were evaluated on separate validation (N=304) and test (N=333) sets of CT-delineated tumors. With both architectures, our approach was significantly more accurate than CT-only networks (U-net: P < 0.001; dense-FCN: P < 0.001), achieving Dice similarity coefficients of 0.71±0.15 (U-net) and 0.74±0.12 (dense-FCN) on the validation set and 0.72±0.14 (U-net) and 0.73±0.12 (dense-FCN) on the test set. Our novel approach demonstrates that educing cross-modality information through learned priors enhances CT segmentation performance.
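The abstract's core mechanism, aligning features from the last two layers of parallel CT and (pseudo-)MR segmentation networks, can be sketched in a few lines. The following PyTorch sketch is illustrative only: the toy SegNet backbone, the stand-in CT-to-pseudo-MR generator, the L2 alignment penalty, and the weight lam are assumptions for exposition, not the authors' implementation (the paper's actual hallucination network and loss terms are not detailed in the abstract).

    # Minimal CMEDL-style training step (illustrative sketch, assuming PyTorch).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SegNet(nn.Module):
        """Toy stand-in for a U-net/dense-FCN backbone; returns logits plus
        the features of its last two layers for alignment."""
        def __init__(self, in_ch=1, out_ch=2):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            )
            self.penult = nn.Conv2d(16, 16, 3, padding=1)  # second-to-last layer
            self.last = nn.Conv2d(16, 16, 3, padding=1)    # last feature layer
            self.head = nn.Conv2d(16, out_ch, 1)

        def forward(self, x):
            h = self.body(x)
            f1 = F.relu(self.penult(h))
            f2 = F.relu(self.last(f1))
            return self.head(f2), (f1, f2)

    ct_net = SegNet()                     # CT segmentation network
    mr_net = SegNet()                     # (pseudo-)MR segmentation network
    generator = SegNet(out_ch=1)          # stand-in CT -> pseudo-MR generator
    seg_loss = nn.CrossEntropyLoss()

    ct = torch.randn(2, 1, 64, 64)        # dummy CT mini-batch
    labels = torch.randint(0, 2, (2, 64, 64))

    pseudo_mr, _ = generator(ct)          # hallucinate an MR image from CT
    ct_logits, ct_feats = ct_net(ct)
    mr_logits, mr_feats = mr_net(pseudo_mr)

    # Align last-two-layer features of the two parallel networks; an L2
    # penalty is one plausible choice of alignment loss.
    align = sum(F.mse_loss(fc, fm) for fc, fm in zip(ct_feats, mr_feats))
    lam = 0.1                             # assumed loss weight
    loss = seg_loss(ct_logits, labels) + seg_loss(mr_logits, labels) + lam * align
    loss.backward()                       # gradients flow through both branches

On this reading of the abstract, only the CT network is needed at inference time; the hallucinated-MR branch acts purely as a training-time prior that shapes the CT features.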


