

Efficient contour-based annotation by iterative deep learning for organ segmentation from volumetric medical images.

Affiliations

School of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China.

Faculty of Information Technology, University of Jyväskylä, Jyväskylä, Finland.

Publication info

Int J Comput Assist Radiol Surg. 2023 Feb;18(2):379-394. doi: 10.1007/s11548-022-02730-z. Epub 2022 Sep 1.

DOI: 10.1007/s11548-022-02730-z
PMID: 36048319
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9889459/
Abstract

PURPOSE

Training deep neural networks usually requires a large amount of human-annotated data. For organ segmentation from volumetric medical images, human annotation is tedious and inefficient. To save human labour and to accelerate the training process, the strategy of annotation by iterative deep learning (AID) has recently become popular in the research community. However, due to the lack of domain knowledge or efficient human-interaction tools, current AID methods still suffer from long training times and a high annotation burden.

METHODS

We develop a contour-based annotation by iterative deep learning (AID) algorithm that uses a boundary representation instead of voxel labels to incorporate high-level organ shape knowledge. We propose a contour segmentation network with a multi-scale feature-extraction backbone to improve boundary detection accuracy. We also develop a contour-based human-intervention method that makes organ boundaries easy to adjust. By combining the contour-based segmentation network with the contour-adjustment intervention method, our algorithm achieves fast few-shot learning and efficient human proofreading.
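The core idea of the METHODS paragraph — representing an organ by its boundary rather than by per-voxel labels — can be illustrated with a minimal sketch. This is not the authors' implementation; it only shows, for a toy 2D binary mask standing in for one CT slice, how a filled region reduces to a much smaller set of contour points:

```python
import numpy as np

def mask_to_contour(mask):
    """Return the (row, col) coordinates of boundary pixels of a binary
    mask: foreground pixels with at least one 4-neighbour in background."""
    padded = np.pad(mask, 1)  # zero border so edge pixels count as boundary
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:]).astype(bool)
    boundary = mask.astype(bool) & ~interior
    return np.argwhere(boundary)

# A filled disc as a toy "organ" cross-section
yy, xx = np.mgrid[:32, :32]
mask = ((yy - 16) ** 2 + (xx - 16) ** 2 <= 100).astype(np.uint8)

contour = mask_to_contour(mask)
# The contour needs far fewer points than the voxel-label representation
print(int(mask.sum()), len(contour))
```

The point count of the contour grows with the perimeter, while voxel labels grow with the area (or volume in 3D), which is why a boundary representation is compact and convenient for point-wise human adjustment.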

RESULTS

For validation, two human operators independently annotated four abdominal organs in computed tomography (CT) images using our method and two compared methods, i.e. a traditional contour-interpolation method and a state-of-the-art (SOTA) convolutional neural network (CNN) method based on a voxel-label representation. Compared to these methods, our approach considerably saved annotation time and reduced inter-rater variability. Our contour detection network also outperforms the SOTA nnU-Net in producing anatomically plausible organ shapes with only a small training set.
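A standard way to quantify the inter-rater variability mentioned above is the Dice similarity coefficient between the two operators' masks (the overlap metric commonly used in segmentation work; the abstract does not state which metrics the paper itself reports):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks (1.0 = identical)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * (a & b).sum() / denom

# Hypothetical annotations of the same organ by two raters
rater1 = np.zeros((16, 16), dtype=bool); rater1[4:12, 4:12] = True
rater2 = np.zeros((16, 16), dtype=bool); rater2[5:12, 4:13] = True

print(round(dice(rater1, rater2), 3))  # prints 0.882
```

Higher (closer to 1.0) inter-rater Dice means less operator-dependent annotations, which is the property the contour-adjustment workflow is claimed to improve.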

CONCLUSION

Taking advantage of the boundary shape prior and the contour representation, our method is more efficient, more accurate, and less prone to inter-operator variability than SOTA AID methods for organ segmentation from volumetric medical images. Its good shape-learning ability and flexible boundary-adjustment function make it well suited to fast annotation of organ structures with regular shapes.


[Figures 1–13: see the PMC full text linked above.]

Similar Articles

1. Efficient contour-based annotation by iterative deep learning for organ segmentation from volumetric medical images.
Int J Comput Assist Radiol Surg. 2023 Feb;18(2):379-394. doi: 10.1007/s11548-022-02730-z. Epub 2022 Sep 1.
2. Fast interactive medical image segmentation with weakly supervised deep learning method.
Int J Comput Assist Radiol Surg. 2020 Sep;15(9):1437-1444. doi: 10.1007/s11548-020-02223-x. Epub 2020 Jul 11.
3. Fully automatic multi-organ segmentation for head and neck cancer radiotherapy using shape representation model constrained fully convolutional neural networks.
Med Phys. 2018 Oct;45(10):4558-4567. doi: 10.1002/mp.13147. Epub 2018 Sep 19.
4. Contour-aware multi-label chest X-ray organ segmentation.
Int J Comput Assist Radiol Surg. 2020 Mar;15(3):425-436. doi: 10.1007/s11548-019-02115-9. Epub 2020 Feb 7.
5. RIL-Contour: a Medical Imaging Dataset Annotation Tool for and with Deep Learning.
J Digit Imaging. 2019 Aug;32(4):571-581. doi: 10.1007/s10278-019-00232-0.
6. Three-stage segmentation of lung region from CT images using deep neural networks.
BMC Med Imaging. 2021 Jul 15;21(1):112. doi: 10.1186/s12880-021-00640-1.
7. Annotation-efficient training of medical image segmentation network based on scribble guidance in difficult areas.
Int J Comput Assist Radiol Surg. 2024 Jan;19(1):87-96. doi: 10.1007/s11548-023-02931-0. Epub 2023 May 26.
8. Lens structure segmentation from AS-OCT images via shape-based learning.
Comput Methods Programs Biomed. 2023 Mar;230:107322. doi: 10.1016/j.cmpb.2022.107322. Epub 2022 Dec 23.
9. Abdomen CT multi-organ segmentation using token-based MLP-Mixer.
Med Phys. 2023 May;50(5):3027-3038. doi: 10.1002/mp.16135. Epub 2022 Dec 20.
10. Automatic liver segmentation by integrating fully convolutional networks into active contour models.
Med Phys. 2019 Oct;46(10):4455-4469. doi: 10.1002/mp.13735. Epub 2019 Aug 16.

Cited By

1. Special Issue: Artificial Intelligence in Advanced Medical Imaging.
Bioengineering (Basel). 2024 Dec 5;11(12):1229. doi: 10.3390/bioengineering11121229.
2. A New Medical Analytical Framework for Automated Detection of MRI Brain Tumor Using Evolutionary Quantum Inspired Level Set Technique.
Bioengineering (Basel). 2023 Jul 9;10(7):819. doi: 10.3390/bioengineering10070819.

References

1. Few-shot medical image segmentation using a global correlation network with discriminative embedding.
Comput Biol Med. 2022 Jan;140:105067. doi: 10.1016/j.compbiomed.2021.105067. Epub 2021 Nov 27.
2. Weakly Supervised Segmentation of COVID19 Infection with Scribble Annotation on CT Images.
Pattern Recognit. 2022 Feb;122:108341. doi: 10.1016/j.patcog.2021.108341. Epub 2021 Sep 20.
3. MIDeepSeg: Minimally interactive segmentation of unseen objects from medical images using deep learning.
Med Image Anal. 2021 Aug;72:102102. doi: 10.1016/j.media.2021.102102. Epub 2021 May 18.
4. Image-level supervised segmentation for human organs with confidence cues.
Phys Med Biol. 2021 Mar 8;66(6):065018. doi: 10.1088/1361-6560/abde98.
5. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation.
Nat Methods. 2021 Feb;18(2):203-211. doi: 10.1038/s41592-020-01008-z. Epub 2020 Dec 7.
6. Transformation-Consistent Self-Ensembling Model for Semisupervised Medical Image Segmentation.
IEEE Trans Neural Netw Learn Syst. 2021 Feb;32(2):523-534. doi: 10.1109/TNNLS.2020.2995319. Epub 2021 Feb 4.
7. RIL-Contour: a Medical Imaging Dataset Annotation Tool for and with Deep Learning.
J Digit Imaging. 2019 Aug;32(4):571-581. doi: 10.1007/s10278-019-00232-0.
8. Automatic Multi-Organ Segmentation on Abdominal CT With Dense V-Networks.
IEEE Trans Med Imaging. 2018 Aug;37(8):1822-1834. doi: 10.1109/TMI.2018.2806309. Epub 2018 Feb 14.
9. DeepIGeoS: A Deep Interactive Geodesic Framework for Medical Image Segmentation.
IEEE Trans Pattern Anal Mach Intell. 2019 Jul;41(7):1559-1572. doi: 10.1109/TPAMI.2018.2840695. Epub 2018 Jun 1.
10. Interactive Medical Image Segmentation Using Deep Learning With Image-Specific Fine Tuning.
IEEE Trans Med Imaging. 2018 Jul;37(7):1562-1573. doi: 10.1109/TMI.2018.2791721.