

Investigating Contrastive Pair Learning's Frontiers in Supervised, Semisupervised, and Self-Supervised Learning.

Authors

Sabiri Bihi, Khtira Amal, El Asri Bouchra, Rhanoui Maryem

Affiliations

IMS Team, ADMIR Laboratory, Rabat IT Center, ENSIAS, Mohammed V University in Rabat, Rabat 10000, Morocco.

LASTIMI Laboratory, EST Salé, Mohammed V University in Rabat, Salé 11060, Morocco.

Publication Information

J Imaging. 2024 Aug 13;10(8):196. doi: 10.3390/jimaging10080196.

DOI: 10.3390/jimaging10080196
PMID: 39194985
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11355693/
Abstract

In recent years, contrastive learning has been a highly favored method for self-supervised representation learning, which significantly improves the unsupervised training of deep image models. Self-supervised learning is a subset of unsupervised learning in which the learning process is supervised by creating pseudolabels from the data themselves. Using supervised final adjustments after unsupervised pretraining is one way to take the most valuable information from a vast collection of unlabeled data and teach from a small number of labeled instances. This study aims firstly to compare contrastive learning with other traditional learning models; secondly to demonstrate by experimental studies the superiority of contrastive learning during classification; thirdly to fine-tune performance using pretrained models and appropriate hyperparameter selection; and finally to address the challenge of using contrastive learning techniques to produce data representations with semantic meaning that are independent of irrelevant factors like position, lighting, and background. Relying on contrastive techniques, the model efficiently captures meaningful representations by discerning similarities and differences between modified copies of the same image. The proposed strategy, involving unsupervised pretraining followed by supervised fine-tuning, improves the robustness, accuracy, and knowledge extraction of deep image models. The results show that even with a modest 5% of data labeled, the semisupervised model achieves an accuracy of 57.72%. However, the use of supervised learning with a contrastive approach and careful hyperparameter tuning increases accuracy to 85.43%. Further adjustment of the hyperparameters resulted in an excellent accuracy of 88.70%.
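The abstract describes the core contrastive mechanism: the model learns representations by pulling together embeddings of two augmented copies of the same image and pushing apart embeddings of different images. A standard instance of this objective is the NT-Xent (normalized temperature-scaled cross-entropy) loss from the SimCLR family; the sketch below is a minimal NumPy illustration of that general technique, not the paper's exact loss, and the function name and temperature value are illustrative assumptions.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over two batches of embeddings.

    z1, z2: (N, D) arrays, embeddings of two augmented views of the
    same N images; row i of z1 and row i of z2 form a positive pair.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)              # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = z @ z.T / temperature                       # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    # positive partner of row i is row (i + N) mod 2N
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # cross-entropy of each row's softmax against its positive partner
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()
```

In training, z1 and z2 would come from an encoder applied to two random augmentations (crop, color jitter, etc.) of each batch image; the loss is low when matched views have higher similarity than all mismatched pairs, which is what makes the learned representation invariant to nuisance factors like position, lighting, and background.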


[Figures 1-20 omitted; see the full text at PMC11355693.]

Similar Articles

1. Investigating Contrastive Pair Learning's Frontiers in Supervised, Semisupervised, and Self-Supervised Learning.
J Imaging. 2024 Aug 13;10(8):196. doi: 10.3390/jimaging10080196.
2. Local contrastive loss with pseudo-label based self-training for semi-supervised medical image segmentation.
Med Image Anal. 2023 Jul;87:102792. doi: 10.1016/j.media.2023.102792. Epub 2023 Mar 11.
3. Weakly-supervised learning-based pathology detection and localization in 3D chest CT scans.
Med Phys. 2024 Nov;51(11):8272-8282. doi: 10.1002/mp.17302. Epub 2024 Aug 14.
4. Efficient deep learning-based automated diagnosis from echocardiography with contrastive self-supervised learning.
Commun Med (Lond). 2024 Jul 6;4(1):133. doi: 10.1038/s43856-024-00538-3.
5. Reducing annotation burden in MR: A novel MR-contrast guided contrastive learning approach for image segmentation.
Med Phys. 2024 Apr;51(4):2707-2720. doi: 10.1002/mp.16820. Epub 2023 Nov 13.
6. Improving fine-tuning of self-supervised models with Contrastive Initialization.
Neural Netw. 2023 Feb;159:198-207. doi: 10.1016/j.neunet.2022.12.012. Epub 2022 Dec 23.
7. Improving the Classification Performance of Esophageal Disease on Small Dataset by Semi-supervised Efficient Contrastive Learning.
J Med Syst. 2021 Nov 22;46(1):4. doi: 10.1007/s10916-021-01782-z.
8. Transformer-based unsupervised contrastive learning for histopathological image classification.
Med Image Anal. 2022 Oct;81:102559. doi: 10.1016/j.media.2022.102559. Epub 2022 Jul 30.
9. SMICLR: Contrastive Learning on Multiple Molecular Representations for Semisupervised and Unsupervised Representation Learning.
J Chem Inf Model. 2022 Sep 12;62(17):3948-3960. doi: 10.1021/acs.jcim.2c00521. Epub 2022 Aug 31.
10. A Visual Encoding Model Based on Contrastive Self-Supervised Learning for Human Brain Activity along the Ventral Visual Stream.
Brain Sci. 2021 Jul 29;11(8):1004. doi: 10.3390/brainsci11081004.

Cited By

1. Hybrid Quality-Based Recommender Systems: A Systematic Literature Review.
J Imaging. 2025 Jan 7;11(1):12. doi: 10.3390/jimaging11010012.

References

1. Contrastive Transfer Learning for Prediction of Adverse Events in Hospitalized Patients.
IEEE J Transl Eng Health Med. 2023 Dec 18;12:215-224. doi: 10.1109/JTEHM.2023.3344035. eCollection 2024.
2. CL3: Generalization of Contrastive Loss for Lifelong Learning.
J Imaging. 2023 Nov 23;9(12):259. doi: 10.3390/jimaging9120259.
3. Universum-Inspired Supervised Contrastive Learning.
IEEE Trans Image Process. 2023;32:4275-4286. doi: 10.1109/TIP.2023.3290514. Epub 2023 Jul 27.
4. StainCUT: Stain Normalization with Contrastive Learning.
J Imaging. 2022 Jul 20;8(7):202. doi: 10.3390/jimaging8070202.
5. Semi-supervised learning with natural language processing for right ventricle classification in echocardiography-a scalable approach.
Comput Biol Med. 2022 Apr;143:105282. doi: 10.1016/j.compbiomed.2022.105282. Epub 2022 Feb 15.
6. Deep metric learning for otitis media classification.
Med Image Anal. 2021 Jul;71:102034. doi: 10.1016/j.media.2021.102034. Epub 2021 Mar 14.
7. Image quality assessment: from error visibility to structural similarity.
IEEE Trans Image Process. 2004 Apr;13(4):600-12. doi: 10.1109/tip.2003.819861.
8. Urban multicultural trauma patients.
ASHA. 1992 Apr;34(4):37-40, 42.