
Comparison of Supervised and Self-Supervised Deep Representations Trained on Histological Images.

Author Affiliations

Faculty of Mathematics and Computer Science, Jagiellonian University, Lojasiewicza 6, 30-348 Kraków, Poland.

Ardigen SA, Podole 76, 30-394 Kraków, Poland.

Publication Information

Stud Health Technol Inform. 2022 Jun 6;290:1052-1053. doi: 10.3233/SHTI220263.

Abstract

Self-supervised methods are gaining increasing attention, especially in the medical domain, where labeled data are scarce. They provide results on par with, or superior to, their fully supervised competitors, yet the difference between the information encoded by the two kinds of models remains unclear. This work introduces a novel comparison framework for explaining differences between supervised and self-supervised models using visual characteristics important to the human perceptual system. We apply this framework to models trained for Gleason scoring and conclude that self-supervised methods are more biased toward contrast and texture transformations than their supervised counterparts, while supervised methods encode more information about shape.
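The abstract does not describe how the comparison framework is implemented, but the underlying idea of probing how sensitive a learned representation is to specific visual transformations can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the authors' method: the helper `representation_sensitivity`, the choice of contrast and shear perturbations, and the untrained ResNet-18 backbones standing in for the supervised and self-supervised histology encoders are all hypothetical.

```python
# Minimal sketch (assumptions, not the paper's actual framework): probe how
# strongly an encoder's representation reacts to a given visual perturbation
# by measuring how far its embeddings drift when that perturbation is applied.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms.functional as TF


def representation_sensitivity(encoder, images, perturb):
    """Return 1 - mean cosine similarity between embeddings of original and
    perturbed images; higher values suggest the representation carries more
    information about the perturbed characteristic."""
    encoder.eval()
    with torch.no_grad():
        z_orig = encoder(images)
        z_pert = encoder(perturb(images))
    return 1.0 - F.cosine_similarity(z_orig, z_pert, dim=1).mean().item()


def contrast_shift(x):
    # Contrast perturbation (the contrast/texture axis of the comparison).
    return TF.adjust_contrast(x, contrast_factor=1.8)


def shape_shift(x):
    # Shear perturbation as a crude stand-in for a shape transformation.
    return TF.affine(x, angle=0.0, translate=[0, 0], scale=1.0, shear=[15.0, 0.0])


if __name__ == "__main__":
    # Untrained ResNet-18 backbones as placeholders for the supervised and
    # self-supervised histology encoders; real checkpoints would be loaded here.
    supervised = models.resnet18(weights=None)
    supervised.fc = torch.nn.Identity()    # expose penultimate features
    self_supervised = models.resnet18(weights=None)
    self_supervised.fc = torch.nn.Identity()

    patches = torch.rand(8, 3, 224, 224)   # dummy histology patches

    for name, enc in [("supervised", supervised), ("self-supervised", self_supervised)]:
        c = representation_sensitivity(enc, patches, contrast_shift)
        s = representation_sensitivity(enc, patches, shape_shift)
        print(f"{name}: contrast sensitivity={c:.4f}, shape sensitivity={s:.4f}")
```

In the paper's terms, an encoder whose embeddings drift more under contrast changes than under shape changes would be the more contrast- and texture-biased one; the sketch simply makes that sensitivity comparison explicit for two encoders and two perturbations.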

