

Learning the Relation Between Similarity Loss and Clustering Loss in Self-Supervised Learning.

Publication Information

IEEE Trans Image Process. 2023;32:3442-3454. doi: 10.1109/TIP.2023.3276708. Epub 2023 Jun 19.

DOI: 10.1109/TIP.2023.3276708
PMID: 37227917
Abstract

Self-supervised learning enables networks to learn discriminative features from massive unlabeled data. Most state-of-the-art methods, based on contrastive learning, maximize the similarity between two augmentations of the same image. By exploiting the consistency between the two augmentations, the burden of manual annotation is lifted. Contrastive learning exploits instance-level information to learn robust features. However, the learned information may be confined to different views of the same instance. In this paper, we attempt to leverage the similarity between two distinct images to boost representation learning in self-supervised learning. In contrast to instance-level information, the similarity between two distinct images may provide more useful information. In addition, we analyze the relation between similarity loss and feature-level cross-entropy loss. Both losses are essential to most deep learning methods, yet the relation between them has been unclear. Similarity loss helps obtain instance-level representations, while feature-level cross-entropy loss helps mine the similarity between two distinct images. We provide theoretical analysis and experiments showing that a suitable combination of these two losses achieves state-of-the-art results. Code is available at https://github.com/guijiejie/ICCL.
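The abstract pairs an instance-level similarity loss with a feature-level cross-entropy loss without spelling out their forms. The sketch below shows one common way such a combination is written in PyTorch; the soft-assignment reading of the cross-entropy term, the stop-gradient targets, and the weight `lam` are illustrative assumptions, not the paper's exact formulation (the authors' code is at https://github.com/guijiejie/ICCL).

```python
# Hedged sketch of combining the two losses the abstract discusses.
# Not the authors' exact ICCL formulation; the stop-gradient targets
# and the weight `lam` are hypothetical choices for illustration.
import torch
import torch.nn.functional as F

def similarity_loss(z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
    # Instance-level term: pull the two augmented views of each image
    # together by maximizing their cosine similarity (negated to minimize).
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    return -(z1 * z2).sum(dim=1).mean()

def feature_cross_entropy_loss(z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
    # Feature-level term: read the softmax over feature dimensions as a
    # soft cluster assignment and ask the two views to agree on it, using
    # each view's detached assignment as the target for the other.
    ce_12 = -(F.softmax(z1, dim=1).detach() * F.log_softmax(z2, dim=1)).sum(dim=1).mean()
    ce_21 = -(F.softmax(z2, dim=1).detach() * F.log_softmax(z1, dim=1)).sum(dim=1).mean()
    return 0.5 * (ce_12 + ce_21)

def combined_loss(z1: torch.Tensor, z2: torch.Tensor, lam: float = 1.0) -> torch.Tensor:
    # The paper's central claim: a suitable weighting of the two terms
    # yields state-of-the-art self-supervised features.
    return similarity_loss(z1, z2) + lam * feature_cross_entropy_loss(z1, z2)

# Usage: z1, z2 are projector outputs for two augmentations of one batch.
z1, z2 = torch.randn(256, 128), torch.randn(256, 128)
loss = combined_loss(z1, z2, lam=0.5)
```

The `detach()` on the target side mirrors the stop-gradient trick used in methods such as SimSiam; whether ICCL applies it is not stated in the abstract.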


Similar Articles

1. Learning the Relation Between Similarity Loss and Clustering Loss in Self-Supervised Learning.
IEEE Trans Image Process. 2023;32:3442-3454. doi: 10.1109/TIP.2023.3276708. Epub 2023 Jun 19.
2. Local contrastive loss with pseudo-label based self-training for semi-supervised medical image segmentation.
Med Image Anal. 2023 Jul;87:102792. doi: 10.1016/j.media.2023.102792. Epub 2023 Mar 11.
3. Seed the Views: Hierarchical Semantic Alignment for Contrastive Representation Learning.
IEEE Trans Pattern Anal Mach Intell. 2023 Mar;45(3):3753-3767. doi: 10.1109/TPAMI.2022.3176690. Epub 2023 Feb 3.
4. MixIR: Mixing Input and Representations for Contrastive Learning.
IEEE Trans Neural Netw Learn Syst. 2025 May;36(5):8255-8264. doi: 10.1109/TNNLS.2024.3439538. Epub 2025 May 2.
5. Learning clustering-friendly representations via partial information discrimination and cross-level interaction.
Neural Netw. 2024 Dec;180:106696. doi: 10.1016/j.neunet.2024.106696. Epub 2024 Sep 3.
6. Transformer-based unsupervised contrastive learning for histopathological image classification.
Med Image Anal. 2022 Oct;81:102559. doi: 10.1016/j.media.2022.102559. Epub 2022 Jul 30.
7. Parts2Whole: Self-supervised Contrastive Learning via Reconstruction.
Domain Adapt Represent Transf Distrib Collab Learn (2020). 2020 Oct;12444:85-95. doi: 10.1007/978-3-030-60548-3_9. Epub 2020 Sep 26.
8. TSSK-Net: Weakly supervised biomarker localization and segmentation with image-level annotation in retinal OCT images.
Comput Biol Med. 2023 Feb;153:106467. doi: 10.1016/j.compbiomed.2022.106467. Epub 2022 Dec 21.
9. Instance-Level Contrastive Learning for Weakly Supervised Object Detection.
Sensors (Basel). 2022 Oct 4;22(19):7525. doi: 10.3390/s22197525.
10. Self-Supervised Contrastive Representation Learning for Semi-Supervised Time-Series Classification.
IEEE Trans Pattern Anal Mach Intell. 2023 Dec;45(12):15604-15618. doi: 10.1109/TPAMI.2023.3308189. Epub 2023 Nov 3.

Cited By

1. Dynamic Graph Clustering Learning for Unsupervised Diabetic Retinopathy Classification.
Diagnostics (Basel). 2023 Oct 19;13(20):3251. doi: 10.3390/diagnostics13203251.