


Bridging the Gap Between Few-Shot and Many-Shot Learning via Distribution Calibration.

Authors

Yang Shuo, Wu Songhua, Liu Tongliang, Xu Min

Publication

IEEE Trans Pattern Anal Mach Intell. 2022 Dec;44(12):9830-9843. doi: 10.1109/TPAMI.2021.3132021. Epub 2022 Nov 7.

DOI: 10.1109/TPAMI.2021.3132021
PMID: 34860647
Abstract

A major gap between few-shot and many-shot learning is the data distribution empirically observed by the model during training. In few-shot learning, the learned model can easily become over-fitted based on the biased distribution formed by only a few training examples, while the ground-truth data distribution is more accurately uncovered in many-shot learning to learn a well-generalized model. In this paper, we propose to calibrate the distribution of these few-sample classes to be more unbiased to alleviate such an over-fitting problem. The distribution calibration is achieved by transferring statistics from the classes with sufficient examples to those few-sample classes. After calibration, an adequate number of examples can be sampled from the calibrated distribution to expand the inputs to the classifier. Specifically, we assume every dimension in the feature representation from the same class follows a Gaussian distribution, so that the mean and the variance of the distribution can borrow from those of similar classes whose statistics are better estimated with an adequate number of samples. Extensive experiments on three datasets, miniImageNet, tieredImageNet, and CUB, show that a simple linear classifier trained using the features sampled from our calibrated distribution can outperform the state-of-the-art accuracy by a large margin. Besides the favorable performance, the proposed method also exhibits high flexibility by showing consistent accuracy improvement when it is built on top of any off-the-shelf pretrained feature extractors and classification models, without extra learnable parameters. The visualization of these generated features demonstrates that our calibrated distribution is an accurate estimation, and thus the gain in generalization ability is convincing.
We also establish a generalization error bound for the proposed distribution-calibration-based few-shot learning, which consists of the distribution assumption error, the distribution approximation error, and the estimation error. This generalization error bound theoretically justifies the effectiveness of the proposed method.
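The calibration procedure the abstract describes can be sketched as follows. This is a minimal NumPy illustration under the paper's per-dimension Gaussian assumption: statistics of the few-shot class are calibrated by averaging with those of the nearest base classes, synthetic features are sampled from the calibrated Gaussian, and a simple linear classifier is fit on them. The synthetic data, the spread constant `alpha`, and the least-squares classifier are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def calibrate(support, base_means, base_vars, k=2, alpha=0.2):
    """Borrow Gaussian statistics from the k base classes nearest to the support mean."""
    mu_s = support.mean(axis=0)
    nearest = np.argsort(np.linalg.norm(base_means - mu_s, axis=1))[:k]
    # calibrated mean: average of support mean and nearest base-class means
    mu = np.vstack([base_means[nearest], mu_s[None]]).mean(axis=0)
    # calibrated per-dimension variance, plus an assumed extra-spread constant
    var = base_vars[nearest].mean(axis=0) + alpha
    return mu, var

rng = np.random.default_rng(0)
dim, n_base = 16, 10
# hypothetical base-class statistics (in practice, estimated from many-shot classes)
base_means = rng.normal(scale=2.0, size=(n_base, dim))
base_vars = np.abs(rng.normal(size=(n_base, dim))) + 0.1

X, y = [], []
for label in range(2):  # a 2-way 1-shot episode with synthetic support features
    support = rng.normal(loc=3.0 * label, size=(1, dim))
    mu, var = calibrate(support, base_means, base_vars)
    # sample synthetic features from the calibrated Gaussian to expand the inputs
    feats = np.vstack([rng.normal(mu, np.sqrt(var), size=(100, dim)), support])
    X.append(feats)
    y.append(np.full(len(feats), label))
X, y = np.vstack(X), np.concatenate(y)

# simple linear classifier: least-squares fit to one-hot targets
Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias column
W, *_ = np.linalg.lstsq(Xb, np.eye(2)[y], rcond=None)
pred = np.argmax(Xb @ W, axis=1)
train_acc = (pred == y).mean()
```

Since no parameters beyond the linear classifier are learned, the same recipe can sit on top of any pretrained feature extractor, which is the flexibility the abstract highlights.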


Similar Articles

1
Bridging the Gap Between Few-Shot and Many-Shot Learning via Distribution Calibration.
IEEE Trans Pattern Anal Mach Intell. 2022 Dec;44(12):9830-9843. doi: 10.1109/TPAMI.2021.3132021. Epub 2022 Nov 7.
2
Learnable Distribution Calibration for Few-Shot Class-Incremental Learning.
IEEE Trans Pattern Anal Mach Intell. 2023 Oct;45(10):12699-12706. doi: 10.1109/TPAMI.2023.3273291. Epub 2023 Sep 5.
3
Few-Shot Learning With a Strong Teacher.
IEEE Trans Pattern Anal Mach Intell. 2024 Mar;46(3):1425-1440. doi: 10.1109/TPAMI.2022.3160362. Epub 2024 Feb 6.
4
How to Trust Unlabeled Data? Instance Credibility Inference for Few-Shot Learning.
IEEE Trans Pattern Anal Mach Intell. 2022 Oct;44(10):6240-6253. doi: 10.1109/TPAMI.2021.3086140. Epub 2022 Sep 14.
5
FSCC: Few-Shot Learning for Macromolecule Classification Based on Contrastive Learning and Distribution Calibration in Cryo-Electron Tomography.
Front Mol Biosci. 2022 Jul 5;9:931949. doi: 10.3389/fmolb.2022.931949. eCollection 2022.
6
Boosting few-shot rare skin disease classification via self-supervision and distribution calibration.
Biomed Eng Lett. 2024 May 20;14(4):877-889. doi: 10.1007/s13534-024-00383-2. eCollection 2024 Jul.
7
Few Shot Class Incremental Learning via Efficient Prototype Replay and Calibration.
Entropy (Basel). 2023 May 10;25(5):776. doi: 10.3390/e25050776.
8
Improving Embedding Generalization in Few-Shot Learning With Instance Neighbor Constraints.
IEEE Trans Image Process. 2023;32:5197-5208. doi: 10.1109/TIP.2023.3310329. Epub 2023 Sep 15.
9
Learning to Learn Adaptive Classifier-Predictor for Few-Shot Learning.
IEEE Trans Neural Netw Learn Syst. 2021 Aug;32(8):3458-3470. doi: 10.1109/TNNLS.2020.3011526. Epub 2021 Aug 3.
10
Enhancing Few-Shot Learning in Lightweight Models via Dual-Faceted Knowledge Distillation.
Sensors (Basel). 2024 Mar 12;24(6):1815. doi: 10.3390/s24061815.

Cited By

1
A novel temporal classification prototype network for few-shot bearing fault detection.
Sci Rep. 2025 Apr 24;15(1):14321. doi: 10.1038/s41598-025-98963-4.
2
Few-shot learning for joint model in underwater acoustic target recognition.
Sci Rep. 2023 Oct 16;13(1):17502. doi: 10.1038/s41598-023-44641-2.
3
Feature augmentation based on information fusion rectification for few-shot image classification.
Sci Rep. 2023 Mar 3;13(1):3607. doi: 10.1038/s41598-023-30398-1.
4
FSCC: Few-Shot Learning for Macromolecule Classification Based on Contrastive Learning and Distribution Calibration in Cryo-Electron Tomography.
Front Mol Biosci. 2022 Jul 5;9:931949. doi: 10.3389/fmolb.2022.931949. eCollection 2022.