
3D Image Segmentation With Sparse Annotation by Self-Training and Internal Registration.

Publication Information

IEEE J Biomed Health Inform. 2021 Jul;25(7):2665-2672. doi: 10.1109/JBHI.2020.3038847. Epub 2021 Jul 27.

DOI: 10.1109/JBHI.2020.3038847
PMID: 33211667
Abstract

Anatomical image segmentation is one of the foundations of medical planning. Recently, convolutional neural networks (CNNs) have achieved much success in segmenting volumetric (3D) images when a large number of fully annotated 3D samples are available. However, volumetric medical image datasets containing a sufficient number of segmented 3D images are rarely accessible, since producing manual segmentation masks is monotonous and time-consuming. Thus, to alleviate the burden of manual annotation, we attempt to effectively train a 3D CNN using sparse annotation, where ground truth is available on just one axial 2D slice of each training 3D image. To tackle this problem, we propose a self-training framework that alternates between two steps: assigning pseudo-annotations to unlabeled voxels, and updating the 3D segmentation network using both the labeled and pseudo-labeled voxels. To produce pseudo-labels more accurately, we benefit both from the propagation of labels (or pseudo-labels) between adjacent slices and from 3D processing of voxels. More precisely, a 2D registration-based method is proposed to gradually propagate labels between consecutive 2D slices, and a 3D U-Net is employed to exploit volumetric information. Ablation studies on benchmarks show that cooperation between the 2D registration and the 3D segmentation provides accurate pseudo-labels, enabling the segmentation network to be trained effectively even when only one expert-segmented slice is available per training sample. Our method is assessed on the CHAOS and Visceral datasets for segmenting abdominal organs. Results demonstrate that, despite utilizing just one segmented slice per 3D image (weaker supervision than the compared weakly supervised methods), our approach achieves higher performance and comes closer to the fully supervised setting.
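The slice-to-slice label propagation described in the abstract can be illustrated with a toy sketch. This is not the paper's implementation: a brute-force integer-translation search stands in for the 2D registration, the 3D U-Net update step of the self-training loop is omitted, and all function names (`estimate_shift`, `propagate_labels`) are hypothetical.

```python
import numpy as np

def estimate_shift(fixed, moving, max_shift=3):
    """Toy translation-only 'registration': find the integer (dy, dx)
    shift of `moving` that best matches `fixed` by sum of squared
    differences. Real methods would use a deformable 2D registration."""
    best, best_shift = np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            err = np.sum((shifted - fixed) ** 2)
            if err < best:
                best, best_shift = err, (dy, dx)
    return best_shift

def propagate_labels(volume, labeled_idx, mask):
    """Pseudo-label generation step: chain slice-to-slice registrations
    outward from the single annotated axial slice, warping the mask
    along the way."""
    pseudo = np.zeros(volume.shape, dtype=mask.dtype)
    pseudo[labeled_idx] = mask
    for step in (1, -1):          # propagate upward, then downward
        cur = mask.copy()
        z = labeled_idx
        while 0 <= z + step < volume.shape[0]:
            dy, dx = estimate_shift(volume[z + step], volume[z])
            cur = np.roll(np.roll(cur, dy, axis=0), dx, axis=1)
            z += step
            pseudo[z] = cur
    return pseudo

# Toy volume: a bright square drifting one pixel per slice; only the
# middle slice (index 2) is "expert-annotated".
vol = np.zeros((5, 16, 16))
true_masks = np.zeros(vol.shape, dtype=bool)
for z in range(5):
    r = 4 + z
    vol[z, r:r + 5, r:r + 5] = 1.0
    true_masks[z, r:r + 5, r:r + 5] = True

pseudo = propagate_labels(vol, labeled_idx=2, mask=true_masks[2])
print(np.array_equal(pseudo, true_masks))
```

In the full framework, these propagated pseudo-labels would be combined with the 3D network's own predictions, the segmentation network retrained on labeled plus pseudo-labeled voxels, and the two steps alternated until convergence.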


Similar Articles

1. 3D Image Segmentation With Sparse Annotation by Self-Training and Internal Registration.
IEEE J Biomed Health Inform. 2021 Jul;25(7):2665-2672. doi: 10.1109/JBHI.2020.3038847. Epub 2021 Jul 27.
2. Sparse annotation learning for dense volumetric MR image segmentation with uncertainty estimation.
Phys Med Biol. 2023 Dec 22;69(1). doi: 10.1088/1361-6560/ad111b.
3. Evaluation of multislice inputs to convolutional neural networks for medical image segmentation.
Med Phys. 2020 Dec;47(12):6216-6231. doi: 10.1002/mp.14391. Epub 2020 Nov 10.
4. CMC-Net: 3D calf muscle compartment segmentation with sparse annotation.
Med Image Anal. 2022 Jul;79:102460. doi: 10.1016/j.media.2022.102460. Epub 2022 Apr 21.
5. Deep learning of the sectional appearances of 3D CT images for anatomical structure segmentation based on an FCN voting method.
Med Phys. 2017 Oct;44(10):5221-5233. doi: 10.1002/mp.12480. Epub 2017 Aug 31.
6. SAC-Net: Learning with weak and noisy labels in histopathology image segmentation.
Med Image Anal. 2023 May;86:102790. doi: 10.1016/j.media.2023.102790. Epub 2023 Mar 2.
7. DMSPS: Dynamically mixed soft pseudo-label supervision for scribble-supervised medical image segmentation.
Med Image Anal. 2024 Oct;97:103274. doi: 10.1016/j.media.2024.103274. Epub 2024 Jul 15.
8. Semi-supervised learning framework with shape encoding for neonatal ventricular segmentation from 3D ultrasound.
Med Phys. 2024 Sep;51(9):6134-6148. doi: 10.1002/mp.17242. Epub 2024 Jun 10.
9. Light mixed-supervised segmentation for 3D medical image data.
Med Phys. 2024 Jan;51(1):167-178. doi: 10.1002/mp.16816. Epub 2023 Nov 1.
10. Label cleaning and propagation for improved segmentation performance using fully convolutional networks.
Int J Comput Assist Radiol Surg. 2021 Mar;16(3):349-361. doi: 10.1007/s11548-021-02312-5. Epub 2021 Mar 3.

Cited By

1. Deep learning models in classifying primary bone tumors and bone infections based on radiographs.
NPJ Precis Oncol. 2025 Mar 13;9(1):72. doi: 10.1038/s41698-025-00855-3.
2. Application of three-dimensional printing in the planning and execution of aortic aneurysm repair.
Front Cardiovasc Med. 2025 Jan 29;11:1485267. doi: 10.3389/fcvm.2024.1485267. eCollection 2024.
3. Automatic segmentation of white matter hyperintensities and correlation analysis for cerebral small vessel disease.
Front Neurol. 2023 Jul 27;14:1242685. doi: 10.3389/fneur.2023.1242685. eCollection 2023.
4. Detection of Breast Cancer Lump and BRCA1/2 Genetic Mutation under Deep Learning.
Comput Intell Neurosci. 2022 Sep 19;2022:9591781. doi: 10.1155/2022/9591781. eCollection 2022.