

Prototype-Guided Memory Replay for Continual Learning

Authors

Ho Stella, Liu Ming, Du Lan, Gao Longxiang, Xiang Yong

Publication

IEEE Trans Neural Netw Learn Syst. 2024 Aug;35(8):10973-10983. doi: 10.1109/TNNLS.2023.3246049. Epub 2024 Aug 5.

DOI: 10.1109/TNNLS.2023.3246049
PMID: 37028080
Abstract

Continual learning (CL) is a machine learning paradigm that accumulates knowledge while learning sequentially. The main challenge in CL is catastrophic forgetting of previously seen tasks, which occurs due to shifts in the probability distribution. To retain knowledge, existing CL models often save some past examples and revisit them while learning new tasks. As a result, the size of saved samples dramatically increases as more samples are seen. To address this issue, we introduce an efficient CL method that stores only a few samples while still achieving good performance. Specifically, we propose a dynamic prototype-guided memory replay (PMR) module, where synthetic prototypes serve as knowledge representations and guide the sample selection for memory replay. This module is integrated into an online meta-learning (OML) model for efficient knowledge transfer. We conduct extensive experiments on the CL benchmark text classification datasets and examine the effect of training set order on the performance of CL models. The experimental results demonstrate the superiority of our approach in terms of accuracy and efficiency.
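The core idea in the abstract — synthetic prototypes representing accumulated knowledge and guiding which samples enter the replay buffer — can be sketched roughly as follows. This is a minimal illustration under simplifying assumptions (class-mean embeddings as prototypes, nearest-to-prototype selection), not the paper's exact PMR algorithm:

```python
import numpy as np

def class_prototypes(embeddings, labels):
    """Build one synthetic prototype per class as the mean of its embeddings."""
    return {c: embeddings[labels == c].mean(axis=0) for c in np.unique(labels)}

def select_replay_samples(embeddings, labels, k=2):
    """Keep only the k samples per class whose embeddings lie closest to the
    class prototype -- a small, representative replay memory."""
    protos = class_prototypes(embeddings, labels)
    selected = []
    for c, proto in protos.items():
        idx = np.where(labels == c)[0]
        dists = np.linalg.norm(embeddings[idx] - proto, axis=1)
        selected.extend(idx[np.argsort(dists)[:k]].tolist())
    return sorted(selected)

# Toy example: 2-D embeddings for two classes; the replay buffer retains
# the two most prototype-like samples of each class.
emb = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 0.0],
                [5.0, 5.0], [5.0, 6.0], [5.0, 10.0]])
lab = np.array([0, 0, 0, 1, 1, 1])
buffer_indices = select_replay_samples(emb, lab, k=2)
```

In a full CL pipeline these retained samples would be interleaved with new-task batches during training (here inside an online meta-learning loop), so the buffer stays small as tasks accumulate.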


Similar Articles

1
Prototype-Guided Memory Replay for Continual Learning.
IEEE Trans Neural Netw Learn Syst. 2024 Aug;35(8):10973-10983. doi: 10.1109/TNNLS.2023.3246049. Epub 2024 Aug 5.
2
CeCR: Cross-entropy contrastive replay for online class-incremental continual learning.
Neural Netw. 2024 May;173:106163. doi: 10.1016/j.neunet.2024.106163. Epub 2024 Feb 3.
3
Few Shot Class Incremental Learning via Efficient Prototype Replay and Calibration.
Entropy (Basel). 2023 May 10;25(5):776. doi: 10.3390/e25050776.
4
Online Continual Learning in Acoustic Scene Classification: An Empirical Study.
Sensors (Basel). 2023 Aug 3;23(15):6893. doi: 10.3390/s23156893.
5
Continual Learning With Knowledge Distillation: A Survey.
IEEE Trans Neural Netw Learn Syst. 2024 Oct 18;PP. doi: 10.1109/TNNLS.2024.3476068.
6
Balanced Destruction-Reconstruction Dynamics for Memory-Replay Class Incremental Learning.
IEEE Trans Image Process. 2024;33:4966-4981. doi: 10.1109/TIP.2024.3451932. Epub 2024 Sep 11.
7
LwF-ECG: Learning-without-forgetting approach for electrocardiogram heartbeat classification based on memory with task selector.
Comput Biol Med. 2021 Oct;137:104807. doi: 10.1016/j.compbiomed.2021.104807. Epub 2021 Aug 27.
8
Map-based experience replay: a memory-efficient solution to catastrophic forgetting in reinforcement learning.
Front Neurorobot. 2023 Jun 27;17:1127642. doi: 10.3389/fnbot.2023.1127642. eCollection 2023.
9
StaRS: Learning a Stable Representation Space for Continual Relation Classification.
IEEE Trans Neural Netw Learn Syst. 2025 May;36(5):9670-9683. doi: 10.1109/TNNLS.2024.3442236. Epub 2025 May 2.
10
Rethinking exemplars for continual semantic segmentation in endoscopy scenes: Entropy-based mini-batch pseudo-replay.
Comput Biol Med. 2023 Oct;165:107412. doi: 10.1016/j.compbiomed.2023.107412. Epub 2023 Aug 30.

Cited By

1
Tuned Compositional Feature Replays for Efficient Stream Learning.
IEEE Trans Neural Netw Learn Syst. 2025 Feb;36(2):3300-3314. doi: 10.1109/TNNLS.2023.3344085. Epub 2025 Feb 6.
2
A Multi-Agent Reinforcement Learning Method for Omnidirectional Walking of Bipedal Robots.
Biomimetics (Basel). 2023 Dec 16;8(8):616. doi: 10.3390/biomimetics8080616.
3
The Lifespan of Human Activity Recognition Systems for Smart Homes.
Sensors (Basel). 2023 Sep 7;23(18):7729. doi: 10.3390/s23187729.