Suppr 超能文献



PAMK: Prototype Augmented Multi-Teacher Knowledge Transfer Network for Continual Zero-Shot Learning.

Authors

Lu Junxin, Sun Shiliang

Publication

IEEE Trans Image Process. 2024;33:3353-3368. doi: 10.1109/TIP.2024.3403053. Epub 2024 May 31.

DOI: 10.1109/TIP.2024.3403053
PMID: 38787667
Abstract

Continual zero-shot learning (CZSL) aims to develop a model that accumulates historical knowledge to recognize unseen tasks, while eliminating catastrophic forgetting for seen tasks when learning new tasks. However, existing CZSL methods, while mitigating catastrophic forgetting for old tasks, often lead to negative transfer problem for new tasks by over-focusing on accumulating old knowledge and neglecting the plasticity of the model for learning new tasks. To tackle these problems, we propose PAMK, a prototype augmented multi-teacher knowledge transfer network that strikes a trade-off between recognition stability for old tasks and generalization plasticity for new tasks. PAMK consists of a prototype augmented contrastive generation (PACG) module and a multi-teacher knowledge transfer (MKT) module. To reduce the cumulative semantic decay of the class representation embedding and mitigate catastrophic forgetting, we propose a continual prototype augmentation strategy based on relevance scores in PACG. Furthermore, by introducing the prototype augmented semantic-visual contrastive loss, PACG promotes intra-class compactness for all classes across all tasks. MKT effectively accumulates semantic knowledge learned from old tasks to recognize new tasks via the proposed multi-teacher knowledge transfer, eliminating the negative transfer problem. Extensive experiments on various CZSL settings demonstrate the superior performance of PAMK compared to state-of-the-art methods. In particular, in the practical task-free CZSL setting, PAMK achieves impressive gains of 3.28%, 3.09% and 3.71% in mean harmonic accuracy on the CUB, AWA1, and AWA2 datasets, respectively.
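The abstract does not give the exact form of the prototype augmented semantic-visual contrastive loss, but the core idea of PACG, promoting intra-class compactness by pulling each visual feature toward its class prototype and away from all other prototypes, can be sketched as follows. The function name, cosine-similarity formulation, and temperature value are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def prototype_contrastive_loss(visual_feats, class_prototypes, labels, tau=0.1):
    """Illustrative prototype-based contrastive loss (assumed form):
    cross-entropy over cosine similarities between each visual feature
    and every class prototype, so features are pulled toward their own
    class prototype and pushed away from the others."""
    # L2-normalize features and prototypes so dot products are cosines
    v = visual_feats / np.linalg.norm(visual_feats, axis=1, keepdims=True)
    p = class_prototypes / np.linalg.norm(class_prototypes, axis=1, keepdims=True)
    logits = (v @ p.T) / tau                       # (N, C) similarity to every prototype
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability before softmax
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    # negative log-likelihood of the true-class prototype
    return float(-np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12)))
```

Under this sketch, a batch whose features already coincide with their class prototypes yields a near-zero loss, while misassigned features are penalized, which is the compactness behavior the abstract attributes to PACG.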

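Similarly, the MKT module's multi-teacher knowledge transfer is not specified here beyond distilling semantic knowledge from old tasks. A generic multi-teacher distillation objective, a weighted average of per-task teacher soft targets matched by the student, gives a rough picture; the function names, uniform default weights, and temperature scaling are assumptions (in PAMK the weighting would plausibly come from the relevance scores mentioned in the abstract):

```python
import numpy as np

def softmax(z, T=2.0):
    """Temperature-scaled softmax over the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def multi_teacher_kd_loss(student_logits, teacher_logits_list, weights=None, T=2.0):
    """Illustrative multi-teacher distillation loss (assumed form):
    KL divergence from a weighted mixture of teacher soft targets
    (one frozen teacher per past task) to the student's distribution,
    scaled by T^2 as in standard knowledge distillation."""
    k = len(teacher_logits_list)
    w = np.full(k, 1.0 / k) if weights is None else np.asarray(weights, float)
    target = sum(wi * softmax(t, T) for wi, t in zip(w, teacher_logits_list))
    s = softmax(student_logits, T)
    kl = np.sum(target * (np.log(target + 1e-12) - np.log(s + 1e-12)))
    return float((T * T) * kl / len(student_logits))
```

The loss is zero when the student already matches the mixed teacher distribution and positive otherwise, so old-task knowledge constrains the student without freezing it, the stability/plasticity trade-off the abstract describes.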

Similar Articles

1. PAMK: Prototype Augmented Multi-Teacher Knowledge Transfer Network for Continual Zero-Shot Learning.
   IEEE Trans Image Process. 2024;33:3353-3368. doi: 10.1109/TIP.2024.3403053. Epub 2024 May 31.
2. Tf-GCZSL: Task-free generalized continual zero-shot learning.
   Neural Netw. 2022 Nov;155:487-497. doi: 10.1016/j.neunet.2022.08.034. Epub 2022 Sep 6.
3. Prototype-Augmented Self-Supervised Generative Network for Generalized Zero-Shot Learning.
   IEEE Trans Image Process. 2024;33:1938-1951. doi: 10.1109/TIP.2024.3351439. Epub 2024 Mar 14.
4. Contrastive Prototype-Guided Generation for Generalized Zero-Shot Learning.
   Neural Netw. 2024 Aug;176:106324. doi: 10.1016/j.neunet.2024.106324. Epub 2024 Apr 15.
5. Adaptive class augmented prototype network for few-shot relation extraction.
   Neural Netw. 2024 Jan;169:134-142. doi: 10.1016/j.neunet.2023.10.025. Epub 2023 Oct 19.
6. Continual Nuclei Segmentation via Prototype-Wise Relation Distillation and Contrastive Learning.
   IEEE Trans Med Imaging. 2023 Dec;42(12):3794-3804. doi: 10.1109/TMI.2023.3307892. Epub 2023 Nov 30.
7. CL3: Generalization of Contrastive Loss for Lifelong Learning.
   J Imaging. 2023 Nov 23;9(12):259. doi: 10.3390/jimaging9120259.
8. Generative Mixup Networks for Zero-Shot Learning.
   IEEE Trans Neural Netw Learn Syst. 2025 Mar;36(3):4054-4065. doi: 10.1109/TNNLS.2022.3142181. Epub 2025 Feb 28.
9. Augmented semantic feature based generative network for generalized zero-shot learning.
   Neural Netw. 2021 Nov;143:1-11. doi: 10.1016/j.neunet.2021.04.014. Epub 2021 Apr 21.
10. Imbalance Mitigation for Continual Learning via Knowledge Decoupling and Dual Enhanced Contrastive Learning.
    IEEE Trans Neural Netw Learn Syst. 2025 Feb;36(2):3450-3463. doi: 10.1109/TNNLS.2023.3347477. Epub 2025 Feb 6.