Contour Transformer Network for One-Shot Segmentation of Anatomical Structures.

Publication Information

IEEE Trans Med Imaging. 2021 Oct;40(10):2672-2684. doi: 10.1109/TMI.2020.3043375. Epub 2021 Sep 30.

Abstract

Accurate segmentation of anatomical structures is vital for medical image analysis. State-of-the-art accuracy is typically achieved by supervised learning methods, yet gathering the requisite expert-labeled image annotations in a scalable manner remains a major obstacle. Annotation-efficient methods that can produce accurate anatomical structure segmentation are therefore highly desirable. In this work, we present the Contour Transformer Network (CTN), a one-shot anatomy segmentation method with a naturally built-in human-in-the-loop mechanism. We formulate anatomy segmentation as a contour evolution process and model the evolution behavior with graph convolutional networks (GCNs). Training the CTN model requires only one labeled image exemplar and leverages additional unlabeled data through newly introduced loss functions that measure the global shape and appearance consistency of contours. On segmentation tasks covering four different anatomies, we demonstrate that our one-shot learning method significantly outperforms non-learning-based methods and performs competitively with state-of-the-art fully supervised deep learning methods. With minimal human-in-the-loop editing feedback, the segmentation performance can be improved further, surpassing the fully supervised methods.
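The contour-evolution idea in the abstract can be made concrete with a short sketch. Below is a minimal, hypothetical PyTorch rendering of the core loop: a graph convolution over a closed contour (treated as a cycle graph, each vertex aggregating its two neighbors) predicts per-vertex 2D offsets from image features sampled at the current vertex positions, and the contour is evolved for a fixed number of steps. All class names, dimensions, and the bilinear feature sampler are illustrative assumptions, not the paper's actual implementation; the one-shot training procedure and the shape/appearance consistency losses are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContourGCNLayer(nn.Module):
    """One graph-convolution layer over a closed contour.

    The contour is a cycle graph: each vertex aggregates features from
    itself and its two neighbors, then applies a shared linear map.
    A minimal stand-in for the GCN blocks the paper describes.
    """
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(3 * in_dim, out_dim)

    def forward(self, x):
        # x: (B, N, C) features for N contour vertices.
        prev = torch.roll(x, shifts=1, dims=1)    # neighbor before each vertex
        nxt = torch.roll(x, shifts=-1, dims=1)    # neighbor after each vertex
        return F.relu(self.linear(torch.cat([prev, x, nxt], dim=-1)))

class ContourEvolution(nn.Module):
    """Hypothetical contour-evolution head: predicts per-vertex (dx, dy)
    offsets from features sampled at the current vertex positions and
    applies them for a fixed number of refinement steps."""
    def __init__(self, feat_dim=64, hidden=128, steps=3):
        super().__init__()
        self.steps = steps
        self.gcn1 = ContourGCNLayer(feat_dim + 2, hidden)
        self.gcn2 = ContourGCNLayer(hidden, hidden)
        self.offset = nn.Linear(hidden, 2)

    def forward(self, contour, sample_feats):
        # contour: (B, N, 2) vertex coordinates in [-1, 1] image space.
        for _ in range(self.steps):
            feats = sample_feats(contour)             # (B, N, feat_dim)
            h = torch.cat([feats, contour], dim=-1)   # append coordinates
            h = self.gcn2(self.gcn1(h))
            contour = contour + self.offset(h)        # evolve the contour
        return contour

# Usage sketch: bilinear sampling from a (here random) backbone feature map.
fmap = torch.randn(1, 64, 128, 128)                   # (B, C, H, W)
def sample_feats(pts):
    grid = pts.unsqueeze(2)                           # (B, N, 1, 2) in [-1, 1]
    out = F.grid_sample(fmap, grid, align_corners=True)  # (B, C, N, 1)
    return out.squeeze(-1).transpose(1, 2)            # (B, N, C)

t = torch.linspace(0.0, 2.0 * torch.pi, 100)
circle = 0.5 * torch.stack([torch.cos(t), torch.sin(t)], dim=-1).unsqueeze(0)
refined = ContourEvolution()(circle, sample_feats)    # evolved (1, 100, 2) contour
```

In the actual method, the predicted offsets would be supervised by the paper's one-shot exemplar together with its shape- and appearance-consistency losses on unlabeled images, rather than driven by the random feature map used here for illustration.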
