Model Attention Expansion for Few-Shot Class-Incremental Learning.

Authors

Wang Xuan, Ji Zhong, Yu Yunlong, Pang Yanwei, Han Jungong

Publication

IEEE Trans Image Process. 2024;33:4419-4431. doi: 10.1109/TIP.2024.3434475. Epub 2024 Aug 6.

DOI: 10.1109/TIP.2024.3434475
PMID: 39088502
Abstract

Few-Shot Class-Incremental Learning (FSCIL) aims at incrementally learning new knowledge from limited training examples without forgetting previous knowledge. However, we observe that existing methods face a challenge known as supervision collapse, where the model disproportionately emphasizes class-specific features of base classes to the detriment of novel class representations, leading to restricted cognitive capabilities. To alleviate this issue, we propose a new framework, Model aTtention Expansion for Few-Shot Class-Incremental Learning (MTE-FSCIL), aimed at expanding the model attention fields to improve transferability without compromising the discriminative capability for base classes. Specifically, the framework adopts a dual-stage training strategy, comprising pre-training and meta-training stages. In the pre-training stage, we present a new regularization technique, named the Reserver (RS) loss, to expand the global perception and reduce over-reliance on class-specific features by amplifying feature map activations. During the meta-training stage, we introduce the Repeller (RP) loss, a novel pair-based loss that promotes variation in representations and improves the model's recognition of sample uniqueness by scattering intra-class samples within the embedding space. Furthermore, we propose a Transformational Adaptation (TA) strategy to enable continuous incorporation of new knowledge from downstream tasks, thus facilitating cross-task knowledge transfer. Extensive experimental results on mini-ImageNet, CIFAR100, and CUB200 datasets demonstrate that our proposed framework consistently outperforms the state-of-the-art methods.
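The abstract names the RS and RP losses but gives only one-line descriptions ("amplifying feature map activations"; "scattering intra-class samples within the embedding space"), not formulas. The PyTorch sketch below is a rough, non-authoritative reading of those two ideas; the function names, the margin parameter, and both formulations are illustrative assumptions, not the authors' definitions.

```python
# Illustrative sketch only -- not the paper's actual RS/RP formulations.
# Both losses are guesses built from the abstract's one-line descriptions;
# names, margins, and weights are assumptions.
import torch
import torch.nn.functional as F


def reserver_loss(feature_maps: torch.Tensor) -> torch.Tensor:
    """RS-style regularizer (hypothetical): reward broad, strong
    activations so the backbone attends beyond base-class-specific
    regions. `feature_maps` has shape (B, C, H, W).
    """
    # Minimizing the negative mean activation amplifies activations;
    # in practice this would enter the objective with a small weight.
    return -feature_maps.mean()


def repeller_loss(embeddings: torch.Tensor, labels: torch.Tensor,
                  margin: float = 0.5) -> torch.Tensor:
    """RP-style pair loss (hypothetical): push same-class embeddings
    apart up to a margin so each sample keeps a distinctive
    representation. `embeddings` is (B, D), `labels` is (B,).
    """
    emb = F.normalize(embeddings, dim=1)
    sim = emb @ emb.t()                               # cosine similarities
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    same.fill_diagonal_(False)                        # drop self-pairs
    if not same.any():
        return embeddings.new_zeros(())
    # Penalize intra-class pairs that sit closer than the margin, which
    # scatters samples of the same class across the embedding space.
    return F.relu(sim[same] - margin).mean()
```

Under this reading, the RS-style term would join the pre-training objective and the RP-style term the meta-training objective, each with a small weight alongside the standard classification loss.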


Similar Articles

1. Few-Shot Class-Incremental Learning by Sampling Multi-Phase Tasks. IEEE Trans Pattern Anal Mach Intell. 2023 Nov;45(11):12816-12831. doi: 10.1109/TPAMI.2022.3200865.
2. Learnable Distribution Calibration for Few-Shot Class-Incremental Learning. IEEE Trans Pattern Anal Mach Intell. 2023 Oct;45(10):12699-12706. doi: 10.1109/TPAMI.2023.3273291. Epub 2023 Sep 5.
3. Few-Shot Class-Incremental Learning for Medical Time Series Classification. IEEE J Biomed Health Inform. 2023 Feb 22;PP. doi: 10.1109/JBHI.2023.3247861.
4. DyCR: A Dynamic Clustering and Recovering Network for Few-Shot Class-Incremental Learning. IEEE Trans Neural Netw Learn Syst. 2025 Apr;36(4):7116-7129. doi: 10.1109/TNNLS.2024.3394844. Epub 2025 Apr 4.
5. Few-shot Class-incremental Learning for Retinal Disease Recognition. IEEE J Biomed Health Inform. 2024 Sep 18;PP. doi: 10.1109/JBHI.2024.3457915.
6. Few Shot Class Incremental Learning via Efficient Prototype Replay and Calibration. Entropy (Basel). 2023 May 10;25(5):776. doi: 10.3390/e25050776.
7. Uncertainty-Aware Distillation for Semi-Supervised Few-Shot Class-Incremental Learning. IEEE Trans Neural Netw Learn Syst. 2024 Oct;35(10):14259-14272. doi: 10.1109/TNNLS.2023.3277018. Epub 2024 Oct 7.
8. Mitigate forgetting in few-shot class-incremental learning using different image views. Neural Netw. 2023 Aug;165:999-1009. doi: 10.1016/j.neunet.2023.06.043. Epub 2023 Jul 5.
9. Memorizing Complementation Network for Few-Shot Class-Incremental Learning. IEEE Trans Image Process. 2023;32:937-948. doi: 10.1109/TIP.2023.3236160. Epub 2023 Jan 23.