Generalized Parametric Contrastive Learning

Authors

Cui Jiequan, Zhong Zhisheng, Tian Zhuotao, Liu Shu, Yu Bei, Jia Jiaya

Publication

IEEE Trans Pattern Anal Mach Intell. 2024 Dec;46(12):7463-7474. doi: 10.1109/TPAMI.2023.3278694. Epub 2024 Nov 6.

DOI: 10.1109/TPAMI.2023.3278694
PMID: 37216259
Abstract

In this paper, we propose Generalized Parametric Contrastive Learning (GPaCo/PaCo), which works well on both imbalanced and balanced data. Based on theoretical analysis, we observe that the supervised contrastive loss tends to be biased toward high-frequency classes, which increases the difficulty of imbalanced learning. We introduce a set of parametric, class-wise learnable centers to rebalance the loss from an optimization perspective. Further, we analyze the GPaCo/PaCo loss in a balanced setting. Our analysis demonstrates that GPaCo/PaCo adaptively strengthens the pull between samples of the same class as more samples cluster around their corresponding centers, which benefits hard-example learning. Experiments on long-tailed benchmarks establish a new state of the art for long-tailed recognition. On full ImageNet, models ranging from CNNs to vision transformers trained with the GPaCo loss show better generalization and stronger robustness than MAE models. Moreover, GPaCo can be applied to semantic segmentation, with clear improvements observed on the four most popular benchmarks.
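The loss described above contrasts each sample against both the other samples in the batch and a set of learnable class centers, with the own-class center acting as a strongly weighted positive. A minimal NumPy sketch of such a PaCo-style loss follows; the specific weighting scheme and the `alpha`/`tau` values are illustrative simplifications, not the paper's exact formulation:

```python
import numpy as np

def paco_style_loss(feats, labels, centers, tau=0.07, alpha=0.05):
    """Toy PaCo-style loss: each sample is contrasted against the other
    batch samples plus a set of learnable class centers. Same-class batch
    samples are positives with a small weight alpha; the sample's own
    class center is a positive with weight 1.0, which is the rebalancing
    mechanism sketched in the abstract (simplified here)."""
    # project features and centers onto the unit sphere
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    centers = centers / np.linalg.norm(centers, axis=1, keepdims=True)
    n, k = feats.shape[0], centers.shape[0]
    total = 0.0
    for i in range(n):
        # contrastive keys: every other batch sample, then the k centers
        keys = np.vstack([np.delete(feats, i, axis=0), centers])
        key_labels = np.concatenate([np.delete(labels, i), np.arange(k)])
        logits = keys @ feats[i] / tau
        # numerically stable log-softmax over all keys
        m = logits.max()
        log_prob = logits - (m + np.log(np.exp(logits - m).sum()))
        # positive weights: alpha for same-class keys ...
        w = np.where(key_labels == labels[i], alpha, 0.0)
        # ... but full weight 1.0 for the sample's own class center
        w[n - 1 + labels[i]] = 1.0
        total += -(w * log_prob).sum() / w.sum()
    return total / n
```

As a sanity check, features clustered near their class centers should incur a much lower loss than random features, since the own-center positive then dominates the softmax.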


Similar Articles

1. Generalized Parametric Contrastive Learning.
   IEEE Trans Pattern Anal Mach Intell. 2024 Dec;46(12):7463-7474. doi: 10.1109/TPAMI.2023.3278694. Epub 2024 Nov 6.
2. ACTION++: Improving Semi-supervised Medical Image Segmentation with Adaptive Anatomical Contrast.
   Med Image Comput Comput Assist Interv. 2023 Oct;14223:194-205. doi: 10.1007/978-3-031-43901-8_19. Epub 2023 Oct 1.
3. A Comprehensive Framework for Long-Tailed Learning via Pretraining and Normalization.
   IEEE Trans Neural Netw Learn Syst. 2024 Mar;35(3):3437-3449. doi: 10.1109/TNNLS.2022.3192475. Epub 2024 Feb 29.
4. A dual-branch model with inter- and intra-branch contrastive loss for long-tailed recognition.
   Neural Netw. 2023 Nov;168:214-222. doi: 10.1016/j.neunet.2023.09.022. Epub 2023 Sep 21.
5. Boundary-aware information maximization for self-supervised medical image segmentation.
   Med Image Anal. 2024 May;94:103150. doi: 10.1016/j.media.2024.103150. Epub 2024 Mar 28.
6. Local contrastive loss with pseudo-label based self-training for semi-supervised medical image segmentation.
   Med Image Anal. 2023 Jul;87:102792. doi: 10.1016/j.media.2023.102792. Epub 2023 Mar 11.
7. Probabilistic Contrastive Learning for Long-Tailed Visual Recognition.
   IEEE Trans Pattern Anal Mach Intell. 2024 Sep;46(9):5890-5904. doi: 10.1109/TPAMI.2024.3369102. Epub 2024 Aug 6.
8. Enhanced Long-Tailed Recognition With Contrastive CutMix Augmentation.
   IEEE Trans Image Process. 2024;33:4215-4230. doi: 10.1109/TIP.2024.3425148. Epub 2024 Jul 22.
9. Anatomy-Aware Contrastive Representation Learning for Fetal Ultrasound.
   Comput Vis ECCV. 2022 Oct;2022:422-436. doi: 10.1007/978-3-031-25066-8_23.
10. Multi-task contrastive learning for semi-supervised medical image segmentation with multi-scale uncertainty estimation.
    Phys Med Biol. 2023 Sep 8;68(18). doi: 10.1088/1361-6560/acf10f.

Cited By

1. ETFT: Equiangular Tight Frame Transformer for Imbalanced Semantic Segmentation.
   Sensors (Basel). 2024 Oct 28;24(21):6913. doi: 10.3390/s24216913.
2. Fine-Grained Cross-Modal Semantic Consistency in Natural Conservation Image Data from a Multi-Task Perspective.
   Sensors (Basel). 2024 May 14;24(10):3130. doi: 10.3390/s24103130.
3. Improving imbalance classification via ensemble learning based on two-stage learning.
   Front Comput Neurosci. 2024 Jan 5;17:1296897. doi: 10.3389/fncom.2023.1296897. eCollection 2023.