
Similar Articles

1. A Weak and Semi-supervised Segmentation Method for Prostate Cancer in TRUS Images.
   J Digit Imaging. 2020 Aug;33(4):838-845. doi: 10.1007/s10278-020-00323-3.
2. Combining weakly and strongly supervised learning improves strong supervision in Gleason pattern classification.
   BMC Med Imaging. 2021 May 8;21(1):77. doi: 10.1186/s12880-021-00609-0.
3. Label-driven magnetic resonance imaging (MRI)-transrectal ultrasound (TRUS) registration using weakly supervised learning for MRI-guided prostate radiotherapy.
   Phys Med Biol. 2020 Jun 26;65(13):135002. doi: 10.1088/1361-6560/ab8cd6.
4. Local contrastive loss with pseudo-label based self-training for semi-supervised medical image segmentation.
   Med Image Anal. 2023 Jul;87:102792. doi: 10.1016/j.media.2023.102792. Epub 2023 Mar 11.
5. Ultrasound prostate segmentation based on multidirectional deeply supervised V-Net.
   Med Phys. 2019 Jul;46(7):3194-3206. doi: 10.1002/mp.13577. Epub 2019 May 29.
6. Automatic prostate segmentation using deep learning on clinically diverse 3D transrectal ultrasound images.
   Med Phys. 2020 Jun;47(6):2413-2426. doi: 10.1002/mp.14134. Epub 2020 Apr 8.
7. Semi-supervised training of deep convolutional neural networks with heterogeneous data and few local annotations: An experiment on prostate histopathology image classification.
   Med Image Anal. 2021 Oct;73:102165. doi: 10.1016/j.media.2021.102165. Epub 2021 Jul 14.
8. MR to ultrasound image registration with segmentation-based learning for HDR prostate brachytherapy.
   Med Phys. 2021 Jun;48(6):3074-3083. doi: 10.1002/mp.14901. Epub 2021 May 14.
9. Uncertainty-guided cross learning via CNN and transformer for semi-supervised honeycomb lung lesion segmentation.
   Phys Med Biol. 2023 Dec 11;68(24). doi: 10.1088/1361-6560/ad0eb2.
10. PyMIC: A deep learning toolkit for annotation-efficient medical image segmentation.
    Comput Methods Programs Biomed. 2023 Apr;231:107398. doi: 10.1016/j.cmpb.2023.107398. Epub 2023 Feb 7.

Cited By

1. U-Net benign prostatic hyperplasia-trained deep learning model for prostate ultrasound image segmentation in prostate cancer.
   Quant Imaging Med Surg. 2025 Jun 6;15(6):5424-5435. doi: 10.21037/qims-2024-2476. Epub 2025 May 30.
2. Combining weakly and strongly supervised learning improves strong supervision in Gleason pattern classification.
   BMC Med Imaging. 2021 May 8;21(1):77. doi: 10.1186/s12880-021-00609-0.

References

1. Epidemiology of Prostate Cancer.
   World J Oncol. 2019 Apr;10(2):63-89. doi: 10.14740/wjon1191. Epub 2019 Apr 20.
2. Joint Weakly and Semi-Supervised Deep Learning for Localization and Classification of Masses in Breast Ultrasound Images.
   IEEE Trans Med Imaging. 2019 Mar;38(3):762-774. doi: 10.1109/TMI.2018.2872031. Epub 2018 Sep 24.
3. A deep learning approach for real time prostate segmentation in freehand ultrasound guided biopsy.
   Med Image Anal. 2018 Aug;48:107-116. doi: 10.1016/j.media.2018.05.010. Epub 2018 Jun 1.
4. Fully automated detection of breast cancer in screening MRI using convolutional neural networks.
   J Med Imaging (Bellingham). 2018 Jan;5(1):014502. doi: 10.1117/1.JMI.5.1.014502. Epub 2018 Jan 11.
5. Prostate cancer screening.
   Investig Clin Urol. 2017 Jul;58(4):217-219. doi: 10.4111/icu.2017.58.4.217. Epub 2017 Jun 20.
6. Automated diagnosis of prostate cancer in multi-parametric MRI based on multimodal convolutional neural networks.
   Phys Med Biol. 2017 Jul 24;62(16):6497-6514. doi: 10.1088/1361-6560/aa7731.
7. Discrimination of Breast Cancer with Microcalcifications on Mammography by Deep Learning.
   Sci Rep. 2016 Jun 7;6:27327. doi: 10.1038/srep27327.
8. Computer-Aided Diagnosis with Deep Learning Architecture: Applications to Breast Lesions in US Images and Pulmonary Nodules in CT Scans.
   Sci Rep. 2016 Apr 15;6:24454. doi: 10.1038/srep24454.
9. Magnetic resonance imaging-targeted biopsy may enhance the diagnostic accuracy of significant prostate cancer detection compared to standard transrectal ultrasound-guided biopsy: a systematic review and meta-analysis.
   Eur Urol. 2015 Sep;68(3):438-50. doi: 10.1016/j.eururo.2014.11.037. Epub 2014 Dec 3.
10. Hierarchical feature representation and multimodal fusion with deep learning for AD/MCI diagnosis.
    Neuroimage. 2014 Nov 1;101:569-82. doi: 10.1016/j.neuroimage.2014.06.077. Epub 2014 Jul 18.

A Weak and Semi-supervised Segmentation Method for Prostate Cancer in TRUS Images

Author Affiliations

Department of Computer Science and Information Engineering, Korea National University of Transportation, Uiwang-si, Kyunggi-do, South Korea.

Department of Radiology, Seoul National University Bundang Hospital, Seongnam-si, Kyunggi-do, South Korea.

Publication Information

J Digit Imaging. 2020 Aug;33(4):838-845. doi: 10.1007/s10278-020-00323-3.

DOI: 10.1007/s10278-020-00323-3
PMID: 32043178
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7522148/
Abstract

The purpose of this research is to exploit a weak and semi-supervised deep learning framework to segment prostate cancer in TRUS images, alleviating the time-consuming work of radiologists in drawing lesion boundaries and allowing the neural network to be trained on data that do not have complete annotations. A histologically proven benchmarking dataset of 102 case images was built, and 22 images were randomly selected for evaluation. A portion of the training images were strongly supervised, annotated pixel by pixel. Using the strongly supervised images, a deep learning neural network was trained. The remaining training images, which had only weak supervision (just the location of the lesion), were fed to the trained network to produce intermediate pixelwise labels for the weakly supervised images. Then, the neural network was retrained on all training images with the original labels and the intermediate labels, and the training images were fed to the retrained network to produce refined labels. Comparing the distances from the centers of mass of the refined and intermediate labels to the weak supervision location, the closer one replaced the previous label; this can be considered a label update. After the label updates, test set images were fed to the retrained network for evaluation. The proposed method gives better results with weak and semi-supervised data than a method using only a small portion of strongly supervised data, although the improvement may not be as large as when the fully strongly supervised dataset is used. In terms of mean intersection over union (mIoU), the proposed method reached about 0.6 when the ratio of strongly supervised data was 40%, about 2% below the performance of the 100% strongly supervised case. The proposed method appears able to help alleviate the time-consuming work of radiologists in drawing lesion boundaries and to allow training of the neural network on data that do not have complete annotations.
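The label-update rule in the abstract is simple enough to sketch: given two candidate masks for an image (the intermediate label and the refined label), keep whichever has its center of mass closer to the weakly supervised lesion location. Below is a minimal NumPy sketch of that rule, plus the IoU metric used for evaluation; the function and variable names are illustrative assumptions, not taken from the paper's code.

```python
import numpy as np

def center_of_mass(mask):
    """Center of mass (row, col) of a 2D binary mask; None if the mask is empty."""
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None
    return np.array([ys.mean(), xs.mean()])

def update_label(intermediate, refined, weak_point):
    """Label-update criterion: keep the candidate mask whose center of mass
    is closer to the weakly supervised lesion location (row, col)."""
    weak = np.asarray(weak_point, dtype=float)
    candidates = [m for m in (intermediate, refined)
                  if center_of_mass(m) is not None]
    if not candidates:
        return intermediate  # both masks empty: nothing to choose between
    return min(candidates,
               key=lambda m: np.linalg.norm(center_of_mass(m) - weak))

def miou(pred, target):
    """Intersection over union for one binary mask pair; averaging this
    over the test set gives the mIoU figure reported in the abstract."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union > 0 else 1.0
```

In the semi-supervised loop described above, `update_label` would be applied per weakly supervised image after each retraining round, so the pseudo-labels drift toward the annotated lesion location rather than away from it.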
