
Similar Articles

1. Can contrastive learning avoid shortcut solutions?
   Adv Neural Inf Process Syst. 2021 Dec;34:4974-4986.
2. Anatomy-Aware Contrastive Representation Learning for Fetal Ultrasound.
   Comput Vis ECCV. 2022 Oct;2022:422-436. doi: 10.1007/978-3-031-25066-8_23.
3. Local contrastive loss with pseudo-label based self-training for semi-supervised medical image segmentation.
   Med Image Anal. 2023 Jul;87:102792. doi: 10.1016/j.media.2023.102792. Epub 2023 Mar 11.
4. MediDRNet: Tackling category imbalance in diabetic retinopathy classification with dual-branch learning and prototypical contrastive learning.
   Comput Methods Programs Biomed. 2024 Aug;253:108230. doi: 10.1016/j.cmpb.2024.108230. Epub 2024 May 17.
5. Parts2Whole: Self-supervised Contrastive Learning via Reconstruction.
   Domain Adapt Represent Transf Distrib Collab Learn (2020). 2020 Oct;12444:85-95. doi: 10.1007/978-3-030-60548-3_9. Epub 2020 Sep 26.
6. GRLC: Graph Representation Learning With Constraints.
   IEEE Trans Neural Netw Learn Syst. 2024 Jun;35(6):8609-8622. doi: 10.1109/TNNLS.2022.3230979. Epub 2024 Jun 3.
7. SCEHR: Supervised Contrastive Learning for Clinical Risk Prediction using Electronic Health Records.
   Proc IEEE Int Conf Data Min. 2021 Dec;2021:857-866. doi: 10.1109/icdm51629.2021.00097.
8. Learning Representation for Clustering Via Prototype Scattering and Positive Sampling.
   IEEE Trans Pattern Anal Mach Intell. 2023 Jun;45(6):7509-7524. doi: 10.1109/TPAMI.2022.3216454. Epub 2023 May 5.
9. MixIR: Mixing Input and Representations for Contrastive Learning.
   IEEE Trans Neural Netw Learn Syst. 2025 May;36(5):8255-8264. doi: 10.1109/TNNLS.2024.3439538. Epub 2025 May 2.
10. Contrastive learning of heart and lung sounds for label-efficient diagnosis.
    Patterns (N Y). 2021 Dec 7;3(1):100400. doi: 10.1016/j.patter.2021.100400. eCollection 2022 Jan 14.

Cited By

1. Self-supervised and few-shot learning for robust bioaerosol monitoring.
   Aerobiologia (Bologna). 2025;41(2):263-268. doi: 10.1007/s10453-025-09850-4. Epub 2025 Apr 9.
2. Unleashing the Potential of Pre-Trained Diffusion Models for Generalizable Person Re-Identification.
   Sensors (Basel). 2025 Jan 18;25(2):552. doi: 10.3390/s25020552.
3. Layerwise complexity-matched learning yields an improved model of cortical area V2.
   ArXiv. 2024 Jul 18:arXiv:2312.11436v3.
4. AI analysis of super-resolution microscopy: Biological discovery in the absence of ground truth.
   J Cell Biol. 2024 Aug 5;223(8). doi: 10.1083/jcb.202311073. Epub 2024 Jun 12.
5. ACTION++: Improving Semi-supervised Medical Image Segmentation with Adaptive Anatomical Contrast.
   Med Image Comput Comput Assist Interv. 2023 Oct;14223:194-205. doi: 10.1007/978-3-031-43901-8_19. Epub 2023 Oct 1.
6. Finding the semantic similarity in single-particle diffraction images using self-supervised contrastive projection learning.
   NPJ Comput Mater. 2023;9(1):24. doi: 10.1038/s41524-023-00966-0. Epub 2023 Feb 16.
7. DrasCLR: A self-supervised framework of learning disease-related and anatomy-specific representation for 3D lung CT images.
   Med Image Anal. 2024 Feb;92:103062. doi: 10.1016/j.media.2023.103062. Epub 2023 Dec 9.
8. Challenges of AI driven diagnosis of chest X-rays transmitted through smart phones: a case study in COVID-19.
   Sci Rep. 2023 Oct 23;13(1):18102. doi: 10.1038/s41598-023-44653-y.
9. Generalization of vision pre-trained models for histopathology.
   Sci Rep. 2023 Apr 13;13(1):6065. doi: 10.1038/s41598-023-33348-z.
10. Reverse translation of artificial intelligence in glaucoma: Connecting basic science with clinical applications.
    Front Ophthalmol (Lausanne). 2023;2. doi: 10.3389/fopht.2022.1057896. Epub 2023 Jan 4.

References Cited in This Article

1. Contrastive Learning With Stronger Augmentations.
   IEEE Trans Pattern Anal Mach Intell. 2023 May;45(5):5549-5560. doi: 10.1109/TPAMI.2022.3203630. Epub 2023 Apr 3.
2. Context Matters: Graph-based Self-supervised Representation Learning for Medical Images.
   Proc AAAI Conf Artif Intell. 2021 Feb;35(6):4874-4882.
3. Genetic epidemiology of COPD (COPDGene) study design.
   COPD. 2010 Feb;7(1):32-43. doi: 10.3109/15412550903499522.

Can contrastive learning avoid shortcut solutions?

Author Information

Robinson Joshua, Sun Li, Yu Ke, Batmanghelich Kayhan, Jegelka Stefanie, Sra Suvrit

Affiliations

Massachusetts Institute of Technology.

University of Pittsburgh.

Publication Information

Adv Neural Inf Process Syst. 2021 Dec;34:4974-4986.

PMID: 35546903
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9089441/
Abstract

The generalization of representations learned via contrastive learning depends crucially on what features of the data are extracted. However, we observe that the contrastive loss does not always sufficiently guide which features are extracted, a behavior that can negatively impact performance on downstream tasks via "shortcuts", i.e., by inadvertently suppressing important predictive features. We find that feature extraction is influenced by the difficulty of the so-called instance discrimination task (i.e., the task of discriminating pairs of similar points from pairs of dissimilar ones). Although harder pairs improve the representation of some features, the improvement comes at the cost of suppressing previously well-represented features. In response, we propose implicit feature modification (IFM), a method for altering positive and negative samples in order to guide contrastive models towards capturing a wider variety of predictive features. Empirically, we observe that IFM reduces feature suppression and, as a result, improves performance on vision and medical imaging tasks. The code is available at: https://github.com/joshr17/IFM.
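The perturbation the abstract describes can be illustrated numerically. In the paper, IFM's adversarial alteration of positives and negatives admits a simple closed form on the similarity logits: reduce the anchor-positive similarity by a budget ε and increase each anchor-negative similarity by ε before applying the InfoNCE loss, which makes instance discrimination harder. The sketch below is a minimal NumPy illustration of that ε-perturbed objective; the function name, toy embeddings, and parameter values are ours for illustration, not from the paper, and the paper's full method trains on a combination of the standard and perturbed losses.

```python
import numpy as np

def infonce_with_ifm(z, z_pos, z_neg, tau=0.5, eps=0.0):
    """InfoNCE loss with an IFM-style epsilon perturbation.

    z      : (d,)   anchor embedding (unit-normalized)
    z_pos  : (d,)   positive embedding (unit-normalized)
    z_neg  : (m, d) negative embeddings (unit-normalized)
    eps    : perturbation budget; eps=0 recovers standard InfoNCE.
    """
    # Perturbed logits: positive similarity lowered by eps,
    # negative similarities raised by eps (the adversarial direction).
    pos = (z @ z_pos - eps) / tau
    neg = (z_neg @ z + eps) / tau
    logits = np.concatenate([[pos], neg])
    # Cross-entropy with the positive as the target class.
    return float(np.log(np.exp(logits).sum()) - pos)

# Toy unit-normalized embeddings.
rng = np.random.default_rng(0)
unit = lambda x: x / np.linalg.norm(x, axis=-1, keepdims=True)
z, zp = unit(rng.normal(size=8)), unit(rng.normal(size=8))
zn = unit(rng.normal(size=(16, 8)))

plain = infonce_with_ifm(z, zp, zn, eps=0.0)
hard = infonce_with_ifm(z, zp, zn, eps=0.1)
assert hard > plain  # the perturbed discrimination task is strictly harder
```

Because the perturbation simultaneously shrinks the positive logit and grows every negative logit, the ε-loss always upper-bounds the standard loss, so the model cannot satisfy it by leaning on a single easily suppressed feature.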

