

Comparison of Supervised and Self-Supervised Deep Representations Trained on Histological Images.

Affiliations

Faculty of Mathematics and Computer Science, Jagiellonian University, Lojasiewicza 6, 30-348 Kraków, Poland.

Ardigen SA, Podole 76, 30-394 Kraków, Poland.

Publication Info

Stud Health Technol Inform. 2022 Jun 6;290:1052-1053. doi: 10.3233/SHTI220263.

DOI: 10.3233/SHTI220263
PMID: 35673201
Abstract

Self-supervised methods are gaining increasing attention, especially in the medical domain, where labeled data are scarce. They deliver results on par with or superior to their fully supervised competitors, yet the difference between the information encoded by the two approaches is unclear. This work introduces a novel comparison framework for explaining differences between supervised and self-supervised models using visual characteristics important to the human perceptual system. We apply this framework to models trained for Gleason score prediction and conclude that self-supervised methods are more sensitive to contrast and texture transformations than their supervised counterparts, while supervised methods encode more information about shape.

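The abstract does not spell out the comparison framework in detail. As a rough illustration only, one common way to probe what a representation encodes is to measure how much an embedding changes under a perceptual transformation (a contrast shift versus a shape-preserving flip, say). The sketch below is a hypothetical minimal version of that idea; the `embed` function is a stand-in for a trained supervised or self-supervised encoder, and the transformations are illustrative, not the paper's:

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def embed(img):
    # Stand-in encoder: per-column mean/std pooling of a grayscale image.
    # In practice this would be a supervised or self-supervised CNN.
    return np.concatenate([img.mean(axis=0), img.std(axis=0)])

def transform_sensitivity(img, transform, embed_fn=embed):
    """1 - cosine similarity between embeddings of the original and the
    transformed image; higher means the representation is more sensitive
    to (i.e. encodes more of) the property the transform perturbs."""
    return 1.0 - cosine(embed_fn(img), embed_fn(transform(img)))

rng = np.random.default_rng(0)
img = rng.random((64, 64))  # dummy grayscale patch in [0, 1)

contrast = lambda x: np.clip(0.5 + 1.8 * (x - 0.5), 0.0, 1.0)  # contrast change
flip = lambda x: x[:, ::-1]                                    # shape-preserving flip

s_contrast = transform_sensitivity(img, contrast)
s_flip = transform_sensitivity(img, flip)
```

Comparing `s_contrast` and `s_flip` for a supervised versus a self-supervised encoder would, under this simplified reading, reveal which perceptual properties each model's representation relies on.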

Similar Articles

1. Comparison of Supervised and Self-Supervised Deep Representations Trained on Histological Images.
   Stud Health Technol Inform. 2022 Jun 6;290:1052-1053. doi: 10.3233/SHTI220263.
2. Semi-supervised training of deep convolutional neural networks with heterogeneous data and few local annotations: An experiment on prostate histopathology image classification.
   Med Image Anal. 2021 Oct;73:102165. doi: 10.1016/j.media.2021.102165. Epub 2021 Jul 14.
3. Self-supervised driven consistency training for annotation efficient histopathology image analysis.
   Med Image Anal. 2022 Jan;75:102256. doi: 10.1016/j.media.2021.102256. Epub 2021 Oct 13.
4. Deep virtual adversarial self-training with consistency regularization for semi-supervised medical image classification.
   Med Image Anal. 2021 May;70:102010. doi: 10.1016/j.media.2021.102010. Epub 2021 Feb 22.
5. WeGleNet: A weakly-supervised convolutional neural network for the semantic segmentation of Gleason grades in prostate histology images.
   Comput Med Imaging Graph. 2021 Mar;88:101846. doi: 10.1016/j.compmedimag.2020.101846. Epub 2021 Jan 13.
6. Heuristic Attention Representation Learning for Self-Supervised Pretraining.
   Sensors (Basel). 2022 Jul 10;22(14):5169. doi: 10.3390/s22145169.
7. Weakly Supervised Deep Nuclei Segmentation Using Partial Points Annotation in Histopathology Images.
   IEEE Trans Med Imaging. 2020 Nov;39(11):3655-3666. doi: 10.1109/TMI.2020.3002244. Epub 2020 Oct 28.
8. Constrained-CNN losses for weakly supervised segmentation.
   Med Image Anal. 2019 May;54:88-99. doi: 10.1016/j.media.2019.02.009. Epub 2019 Feb 13.
9. Fine-Grained Self-Supervised Learning with Jigsaw puzzles for medical image classification.
   Comput Biol Med. 2024 May;174:108460. doi: 10.1016/j.compbiomed.2024.108460. Epub 2024 Apr 8.
10. Semi-supervised learning for automatic segmentation of the knee from MRI with convolutional neural networks.
    Comput Methods Programs Biomed. 2020 Jun;189:105328. doi: 10.1016/j.cmpb.2020.105328. Epub 2020 Jan 11.