
Segment Together: A Versatile Paradigm for Semi-Supervised Medical Image Segmentation.

Author Information

Zeng Qingjie, Xie Yutong, Lu Zilin, Lu Mengkang, Wu Yicheng, Xia Yong

Publication Information

IEEE Trans Med Imaging. 2025 Jul;44(7):2948-2959. doi: 10.1109/TMI.2025.3556310.

Abstract

The scarcity of annotations has become a significant obstacle to training powerful deep-learning models for medical image segmentation, limiting their clinical application. To overcome this, semi-supervised learning (SSL), which leverages abundant unlabeled data, is highly desirable for enhancing model training. However, most existing works still focus on specific medical tasks and underestimate the potential of learning across diverse tasks and datasets. In this paper, we propose a Versatile Semi-supervised framework (VerSemi) to present a new perspective that integrates various SSL tasks into a unified model with an extensive label space, exploiting more unlabeled data for semi-supervised medical image segmentation. Specifically, we introduce a dynamic task-prompted design to segment various targets from different datasets. Next, this unified model is used to identify the foreground regions from all labeled data, capturing cross-dataset semantics. In particular, we create a synthetic task with a CutMix strategy to augment foreground targets within the expanded label space. To effectively utilize unlabeled data, we introduce a consistency constraint that aligns aggregated predictions from the various tasks with those from the synthetic task, further guiding the model to accurately segment foreground regions during training. We evaluated our VerSemi framework against seven established SSL methods on four public benchmark datasets. Our results suggest that VerSemi consistently outperforms all competing methods, surpassing the second-best method by an average Dice gain of 2.69% across the four datasets and setting a new state of the art for semi-supervised medical image segmentation. Code is available at https://github.com/maxwell0027/VerSemi.
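To make the CutMix-based synthetic task and the consistency constraint concrete, the sketch below shows one plausible reading of that training signal. It is a minimal illustration, not the authors' implementation: it assumes a prompt-conditioned network model(x, prompt) returning a single-channel foreground logit map, and it aggregates the per-task foreground probabilities with an element-wise max; the function names, prompt handling, loss choice, and lam parameter are all assumptions. The linked repository contains the actual VerSemi code.

```python
# Hedged sketch (PyTorch): CutMix two unlabeled batches, then align the
# synthetic-task prediction on the mixed image with the CutMix
# composition of the aggregated per-task foreground predictions.
# model(x, prompt) -> (B, 1, H, W) foreground logits is an ASSUMED API.
import torch
import torch.nn.functional as F


def rand_bbox(shape, lam):
    """Sample a random box covering roughly (1 - lam) of the image area."""
    _, _, h, w = shape
    cut_h = int(h * (1 - lam) ** 0.5)
    cut_w = int(w * (1 - lam) ** 0.5)
    cy = torch.randint(h, (1,)).item()
    cx = torch.randint(w, (1,)).item()
    y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
    x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)
    return y1, y2, x1, x2


def synthetic_consistency_loss(model, u_a, u_b, task_prompts,
                               synth_prompt, lam=0.5):
    """Consistency term on unlabeled batches u_a, u_b (B, C, H, W)."""
    # Build the synthetic image by pasting a box from u_b into u_a.
    y1, y2, x1, x2 = rand_bbox(u_a.shape, lam)
    mixed = u_a.clone()
    mixed[:, :, y1:y2, x1:x2] = u_b[:, :, y1:y2, x1:x2]

    with torch.no_grad():
        # Aggregate foreground probabilities over all task prompts;
        # element-wise max is one plausible aggregation choice.
        fg_a = torch.stack([torch.sigmoid(model(u_a, p))
                            for p in task_prompts]).amax(dim=0)
        fg_b = torch.stack([torch.sigmoid(model(u_b, p))
                            for p in task_prompts]).amax(dim=0)
        # Compose the pseudo-target with the same CutMix box.
        target = fg_a.clone()
        target[:, :, y1:y2, x1:x2] = fg_b[:, :, y1:y2, x1:x2]

    # The synthetic-task prediction on the mixed image should match
    # the composed aggregate of the per-task predictions.
    pred = torch.sigmoid(model(mixed, synth_prompt))
    return F.mse_loss(pred, target)
```

In a full training loop, a term like this would be weighted and added to the supervised segmentation loss on labeled batches; how the prompts are encoded and how predictions are aggregated in VerSemi itself is specified in the paper and repository.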

