Using CT images to assist the segmentation of MR images via generalization: Segmentation of the renal parenchyma of renal carcinoma patients.

Author Information

Yu Zhengyang, Zhao Tongtong, Xi Zuqiang, Zhang Yaofeng, Zhang Xiaodong, Wang Xiaoying

Affiliations

Department of Radiology, Peking University First Hospital, Beijing, China.

Beijing Smart Tree Medical Technology Co., Ltd., Beijing, China.

Publication Information

Med Phys. 2025 Feb;52(2):951-964. doi: 10.1002/mp.17494. Epub 2024 Nov 4.

DOI: 10.1002/mp.17494
PMID: 39494916
Abstract

BACKGROUND

Developing deep learning models that segment medical images across multiple modalities with limited data and annotation is an attractive but challenging task; previous work has approached it with complex external frameworks that bridge the gap between modalities. Exploiting the generalization ability of networks across imaging modalities could offer simpler, more accessible methods, but comprehensive testing is still needed.

PURPOSE

To explore the feasibility and robustness of using computed tomography (CT) images to assist the segmentation of magnetic resonance (MR) images via generalization, in the segmentation of the renal parenchyma of renal cell carcinoma (RCC) patients.

METHODS

Nephrographic-phase CT images and fat-suppressed T2-weighted (fs-T2W) images were retrospectively collected. The pure CT dataset comprised 116 CT images. Additionally, 240 MR images were randomly divided into subsets A and B. From subset A, three training datasets of 40, 80, and 120 images were constructed; three datasets were constructed from subset B in the same way. Mixed-modality datasets were then created by combining these pure MR datasets with the 116 CT images. Using these 13 datasets, 3D-UNet models were trained to segment the renal parenchyma in two steps: first the kidneys, then the renal parenchyma within them. The models were evaluated on internal MR (n = 120) and CT (n = 65) validation datasets and an external CT validation dataset (n = 79), using the mean Dice similarity coefficient (DSC). To demonstrate the robustness of the generalization ability across different modality proportions, models trained with mixed modalities at three proportions were compared with models trained on pure MR data, using repeated-measures analysis of variance (RM-ANOVA). A renal parenchyma volume quantification tool was built from the trained models and evaluated by the mean differences and Pearson correlation coefficients between model-segmented and ground-truth volumes.
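The evaluation metric above, the Dice similarity coefficient, can be sketched as follows; this is a minimal NumPy illustration assuming binary masks, not the authors' code (the function name is an assumption):

```python
import numpy as np

def dice_similarity(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary masks:
    DSC = 2 * |A & B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / total
```

A validation score would then be the mean DSC over all cases in a validation dataset.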

RESULTS

In the MR validation, the models trained on the 116 CT images achieved mean DSCs of 0.826 for the kidney segmentation model on whole images, and 0.842 and 0.953 for the renal parenchyma segmentation model on kidneys with and without RCC, respectively. All models trained with mixed modalities reached mean DSCs above 0.9 in every CT and MR validation. Compared with the models trained on pure MR data, the mean DSCs of the mixed-modality models were significantly greater than or equal at all three modality proportions. The volume differences were all significantly lower than one-third of the volumetric quantification error of a previous method, and the Pearson correlation coefficients of volumes were above 0.96 for kidneys with and without RCC in all three validations.
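The volume agreement reported above can be sketched as follows: voxel counts are converted to millilitres via the voxel spacing, and model versus ground-truth volumes are compared with Pearson's r. The function names and spacing convention are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def mask_volume_ml(mask: np.ndarray, spacing_mm: tuple) -> float:
    """Volume of a binary segmentation mask in millilitres,
    given per-axis voxel spacing in millimetres."""
    voxel_ml = float(np.prod(spacing_mm)) / 1000.0  # 1 ml = 1000 mm^3
    return mask.astype(bool).sum() * voxel_ml

def pearson_r(x, y) -> float:
    """Pearson correlation coefficient between two volume series."""
    return float(np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1])
```

The paper's evaluation would then report the mean difference between the two volume series alongside their Pearson correlation.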

CONCLUSION

CT images can be used to assist the segmentation of MR images via generalization, with or without the supervision of MR data, and this ability showed acceptable robustness. A tool for accurately measuring renal parenchymal volume on CT and MR images was established.


Similar Articles

1. Using CT images to assist the segmentation of MR images via generalization: Segmentation of the renal parenchyma of renal carcinoma patients.
Med Phys. 2025 Feb;52(2):951-964. doi: 10.1002/mp.17494. Epub 2024 Nov 4.
2. An investigation of the effect of fat suppression and dimensionality on the accuracy of breast MRI segmentation using U-nets.
Med Phys. 2019 Mar;46(3):1230-1244. doi: 10.1002/mp.13375. Epub 2019 Feb 4.
3. Effect of Dataset Size and Medical Image Modality on Convolutional Neural Network Model Performance for Automated Segmentation: A CT and MR Renal Tumor Imaging Study.
J Digit Imaging. 2023 Aug;36(4):1770-1781. doi: 10.1007/s10278-023-00804-1. Epub 2023 Mar 17.
4. Two-stage deep learning model for fully automated pancreas segmentation on computed tomography: Comparison with intra-reader and inter-reader reliability at full and reduced radiation dose on an external dataset.
Med Phys. 2021 May;48(5):2468-2481. doi: 10.1002/mp.14782. Epub 2021 Mar 16.
5. Cross-modality (CT-MRI) prior augmented deep learning for robust lung tumor segmentation from small MR datasets.
Med Phys. 2019 Oct;46(10):4392-4404. doi: 10.1002/mp.13695. Epub 2019 Aug 20.
6. Multi-modal segmentation with missing image data for automatic delineation of gross tumor volumes in head and neck cancers.
Med Phys. 2024 Oct;51(10):7295-7307. doi: 10.1002/mp.17260. Epub 2024 Jun 19.
7. Deep learning-based multimodal segmentation of oropharyngeal squamous cell carcinoma on CT and MRI using self-configuring nnU-Net.
Eur Radiol. 2024 Aug;34(8):5389-5400. doi: 10.1007/s00330-024-10585-y. Epub 2024 Jan 20.
8. Patient-specific transfer learning for auto-segmentation in adaptive 0.35 T MRgRT of prostate cancer: a bi-centric evaluation.
Med Phys. 2023 Mar;50(3):1573-1585. doi: 10.1002/mp.16056. Epub 2022 Nov 7.
9. Minimally interactive segmentation of soft-tissue tumors on CT and MRI using deep learning.
Eur Radiol. 2025 May;35(5):2736-2745. doi: 10.1007/s00330-024-11167-8. Epub 2024 Nov 19.
10. Incremental value of automatically segmented perirenal adipose tissue for pathological grading of clear cell renal cell carcinoma: a multicenter cohort study.
Int J Surg. 2024 Jul 1;110(7):4221-4230. doi: 10.1097/JS9.0000000000001358.