

A deep weakly semi-supervised framework for endoscopic lesion segmentation.

Author information

Shi Yuxuan, Wang Hong, Ji Haoqin, Liu Haozhe, Li Yuexiang, He Nanjun, Wei Dong, Huang Yawen, Dai Qi, Wu Jianrong, Chen Xinrong, Zheng Yefeng, Yu Hongmeng

Affiliations

ENT Institute and Department of Otolaryngology, Eye & ENT Hospital, Fudan University, Shanghai, 200031, China.

Tencent Jarvis Lab, Shenzhen 518000, China.

Publication information

Med Image Anal. 2023 Dec;90:102973. doi: 10.1016/j.media.2023.102973. Epub 2023 Sep 20.

DOI: 10.1016/j.media.2023.102973
PMID: 37757643
Abstract

In the field of medical image analysis, accurate lesion segmentation benefits subsequent clinical diagnosis and treatment planning. Various deep learning-based methods have been proposed for the segmentation task. Although they achieve promising performance, fully-supervised approaches require pixel-level annotations for model training, which are tedious and time-consuming for experienced radiologists to collect. In this paper, we propose a weakly semi-supervised segmentation framework called Point Segmentation Transformer (Point SEGTR). The framework trains on a small amount of fully-supervised data with pixel-level segmentation masks and a large amount of weakly-supervised data with point-level annotations (i.e., a single annotated point inside each object), which significantly reduces the demand for pixel-level annotations. To fully exploit both annotation types, we propose two regularization terms, multi-point consistency and symmetric consistency, to boost the quality of pseudo labels, which are then used to train a student model for inference. Extensive experiments on three endoscopy datasets with different lesion structures and body sites (e.g., colorectal and nasopharynx) substantiate the effectiveness and generality of the proposed method, as well as its potential to loosen the requirement for pixel-level annotations, which is valuable for clinical applications.
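To make the two regularization terms concrete, here is a minimal sketch of how consistency losses of this kind are typically computed. This is not the authors' implementation: the function names, the use of a mean-squared distance, and the choice of a horizontal flip as the symmetry transform are all assumptions for illustration.

```python
import numpy as np

def multi_point_consistency(masks):
    """Multi-point consistency (illustrative): probability maps produced
    by prompting the model with different points inside the same lesion
    should agree. `masks` is a list of HxW probability maps; the loss is
    the mean squared deviation from their consensus (pixel-wise mean)."""
    masks = np.stack(masks)           # (K, H, W)
    consensus = masks.mean(axis=0)    # pixel-wise average of the K maps
    return float(((masks - consensus) ** 2).mean())

def symmetric_consistency(pred, pred_on_flipped):
    """Symmetric consistency (illustrative): the prediction on a
    horizontally flipped input, flipped back to the original frame,
    should match the prediction on the unflipped input."""
    realigned = pred_on_flipped[:, ::-1]  # undo the horizontal flip
    return float(((pred - realigned) ** 2).mean())
```

Both terms are zero when the model is perfectly consistent, so adding them to the training objective penalizes pseudo labels that change under point re-sampling or a simple input symmetry.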


Similar articles

1. A deep weakly semi-supervised framework for endoscopic lesion segmentation.
   Med Image Anal. 2023 Dec;90:102973. doi: 10.1016/j.media.2023.102973. Epub 2023 Sep 20.
2. Uncertainty-guided cross learning via CNN and transformer for semi-supervised honeycomb lung lesion segmentation.
   Phys Med Biol. 2023 Dec 11;68(24). doi: 10.1088/1361-6560/ad0eb2.
3. Cyclic Learning: Bridging Image-Level Labels and Nuclei Instance Segmentation.
   IEEE Trans Med Imaging. 2023 Oct;42(10):3104-3116. doi: 10.1109/TMI.2023.3275609. Epub 2023 Oct 2.
4. Segmentation only uses sparse annotations: Unified weakly and semi-supervised learning in medical images.
   Med Image Anal. 2022 Aug;80:102515. doi: 10.1016/j.media.2022.102515. Epub 2022 Jun 17.
5. PyMIC: A deep learning toolkit for annotation-efficient medical image segmentation.
   Comput Methods Programs Biomed. 2023 Apr;231:107398. doi: 10.1016/j.cmpb.2023.107398. Epub 2023 Feb 7.
6. Weakly Supervised Deep Nuclei Segmentation Using Partial Points Annotation in Histopathology Images.
   IEEE Trans Med Imaging. 2020 Nov;39(11):3655-3666. doi: 10.1109/TMI.2020.3002244. Epub 2020 Oct 28.
7. Local contrastive loss with pseudo-label based self-training for semi-supervised medical image segmentation.
   Med Image Anal. 2023 Jul;87:102792. doi: 10.1016/j.media.2023.102792. Epub 2023 Mar 11.
8. Weakly-supervised convolutional neural networks of renal tumor segmentation in abdominal CTA images.
   BMC Med Imaging. 2020 Apr 15;20(1):37. doi: 10.1186/s12880-020-00435-w.
9. MaskMitosis: a deep learning framework for fully supervised, weakly supervised, and unsupervised mitosis detection in histopathology images.
   Med Biol Eng Comput. 2020 Jul;58(7):1603-1623. doi: 10.1007/s11517-020-02175-z. Epub 2020 May 22.
10. Weakly supervised segmentation on neural compressed histopathology with self-equivariant regularization.
   Med Image Anal. 2022 Aug;80:102482. doi: 10.1016/j.media.2022.102482. Epub 2022 May 25.

Cited by

1. Postoperative outcome analysis of chronic rhinosinusitis using transfer learning with pre-trained foundation models based on endoscopic images: a multicenter, observational study.
   Biomed Eng Online. 2025 Jul 27;24(1):95. doi: 10.1186/s12938-025-01428-y.
2. Precision enhancement in wireless capsule endoscopy: a novel transformer-based approach for real-time video object detection.
   Front Artif Intell. 2025 Apr 30;8:1529814. doi: 10.3389/frai.2025.1529814. eCollection 2025.
3. Real-time artificial intelligence-assisted detection and segmentation of nasopharyngeal carcinoma using multimodal endoscopic data: a multi-center, prospective study.
   EClinicalMedicine. 2025 Feb 15;81:103120. doi: 10.1016/j.eclinm.2025.103120. eCollection 2025 Mar.