
Investigating Contrastive Pair Learning's Frontiers in Supervised, Semisupervised, and Self-Supervised Learning.

Author Information

Sabiri Bihi, Khtira Amal, El Asri Bouchra, Rhanoui Maryem

Affiliations

IMS Team, ADMIR Laboratory, Rabat IT Center, ENSIAS, Mohammed V University in Rabat, Rabat 10000, Morocco.

LASTIMI Laboratory, EST Salé, Mohammed V University in Rabat, Salé 11060, Morocco.

Publication Information

J Imaging. 2024 Aug 13;10(8):196. doi: 10.3390/jimaging10080196.

Abstract

In recent years, contrastive learning has been a highly favored method for self-supervised representation learning, and it significantly improves the unsupervised training of deep image models. Self-supervised learning is a subset of unsupervised learning in which the learning process is supervised by creating pseudolabels from the data themselves. Supervised fine-tuning after unsupervised pretraining is one way to extract the most valuable information from a vast collection of unlabeled data and learn from a small number of labeled instances. This study aims, firstly, to compare contrastive learning with other traditional learning models; secondly, to demonstrate through experimental studies the superiority of contrastive learning for classification; thirdly, to fine-tune performance using pretrained models and appropriate hyperparameter selection; and finally, to address the challenge of using contrastive learning techniques to produce data representations with semantic meaning that are independent of irrelevant factors such as position, lighting, and background. Relying on contrastive techniques, the model efficiently captures meaningful representations by discerning similarities and differences between modified copies of the same image. The proposed strategy, unsupervised pretraining followed by supervised fine-tuning, improves the robustness, accuracy, and knowledge extraction of deep image models. The results show that even with a modest 5% of the data labeled, the semisupervised model achieves an accuracy of 57.72%. However, supervised learning with a contrastive approach and careful hyperparameter tuning increases accuracy to 85.43%, and further hyperparameter adjustment yields an excellent accuracy of 88.70%.
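The abstract describes learning by contrasting two augmented views of the same image, followed by supervised fine-tuning on a small labeled subset. The sketch below illustrates one common instantiation of that idea, a SimCLR-style NT-Xent objective in PyTorch; the encoder architecture, the noise-based augmentation, and the temperature value are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastiveEncoder(nn.Module):
    """Backbone plus projection head for contrastive pretraining."""
    def __init__(self, feature_dim=128):
        super().__init__()
        # Any CNN backbone works; a tiny one keeps the sketch self-contained.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.projector = nn.Sequential(
            nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, feature_dim),
        )

    def forward(self, x):
        return self.projector(self.backbone(x))

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss: pull the two augmented views of each image together
    while pushing apart views of different images in the batch."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, d), unit norm
    sim = z @ z.t() / temperature                       # cosine-similarity logits
    n = z1.size(0)
    eye = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float('-inf'))           # an image is never its own negative
    # Row i's positive is its other view: i+n for the first half, i-n for the second.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Two random augmentations of the same batch form the positive pairs.
# Real pipelines use crops, flips, and color jitter; noise is a stand-in here.
encoder = ContrastiveEncoder()
images = torch.randn(8, 3, 32, 32)
view1 = images + 0.1 * torch.randn_like(images)
view2 = images + 0.1 * torch.randn_like(images)
loss = nt_xent_loss(encoder(view1), encoder(view2))
loss.backward()

# Supervised fine-tuning stage: drop the projector and train a classifier
# head on the small labeled subset (e.g., the 5% split the abstract evaluates).
classifier = nn.Sequential(encoder.backbone, nn.Linear(64, 10))
```

After pretraining, the projection head is typically discarded and a classifier head is trained, or the whole network fine-tuned, on the labeled subset; this is the pretrain-then-fine-tune strategy whose accuracy figures the abstract reports.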


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/60b1/11355693/22ae25c6bb0b/jimaging-10-00196-g001.jpg
