
Balanced Destruction-Reconstruction Dynamics for Memory-Replay Class Incremental Learning.

Authors

Zhou Yuhang, Yao Jiangchao, Hong Feng, Zhang Ya, Wang Yanfeng

Publication

IEEE Trans Image Process. 2024;33:4966-4981. doi: 10.1109/TIP.2024.3451932. Epub 2024 Sep 11.

DOI: 10.1109/TIP.2024.3451932
PMID: 39236120
Abstract

Class incremental learning (CIL) aims to incrementally update a trained model with new classes of samples (plasticity) while retaining previously learned abilities (stability). To address the most challenging issue in this goal, i.e., catastrophic forgetting, the mainstream paradigm is memory-replay CIL, which consolidates old knowledge by replaying a small number of old-class samples saved in memory. Despite its effectiveness, memory-replay CIL suffers from an intrinsic limitation in its destruction-reconstruction dynamics: if old knowledge is severely destroyed, reconstructing a lossless counterpart becomes very hard. Our theoretical analysis shows that the destruction of old knowledge can be effectively alleviated by balancing the contributions of samples from the current phase and those saved in memory. Motivated by this theoretical finding, we propose a novel Balanced Destruction-Reconstruction module (BDR) for memory-replay CIL, which achieves better knowledge reconstruction by reducing the degree of maximal destruction of old knowledge. Specifically, to achieve a better balance between old knowledge and new classes, the proposed BDR module takes two factors into account: the variance in training status across different classes and the quantity imbalance between samples from the current phase and those in memory. By dynamically manipulating the gradient during training based on these factors, BDR effectively alleviates knowledge destruction and improves knowledge reconstruction. Extensive experiments on a range of CIL benchmarks show that, as a lightweight plug-and-play module, BDR significantly improves the performance of existing state-of-the-art methods and generalizes well. Our code is publicly available here.
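The core idea of the abstract, balancing the gradient contribution of current-phase samples against the much scarcer memory samples, can be illustrated with a short sketch. The PyTorch code below is a minimal, hypothetical example, not the authors' BDR implementation: the inverse-frequency weight w_mem is an assumed stand-in for BDR's dynamic gradient manipulation (which also accounts for per-class training status), and model, optimizer, and the batch variables are placeholders.

```python
# Minimal sketch of one memory-replay training step with a balanced
# reweighting between current-phase and memory samples. Illustrative
# only: the weighting scheme below is a hypothetical stand-in for
# BDR's dynamic gradient manipulation, not the paper's method.
import torch.nn.functional as F

def replay_step(model, optimizer, new_batch, memory_batch):
    """One update over a mixed batch of current-phase and memory samples."""
    x_new, y_new = new_batch      # samples from the current phase (new classes)
    x_mem, y_mem = memory_batch   # old-class samples replayed from memory

    loss_new = F.cross_entropy(model(x_new), y_new)
    loss_mem = F.cross_entropy(model(x_mem), y_mem)

    # Hypothetical balancing: upweight the scarce memory samples so the
    # gradient contribution of old classes is not drowned out by the far
    # larger number of new-class samples (the quantity-imbalance factor).
    n_new, n_mem = len(y_new), len(y_mem)
    w_mem = n_new / max(n_mem, 1)

    loss = loss_new + w_mem * loss_mem
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under this assumption, the effective gradient from memory samples is scaled to match the current-phase batch size, which is one simple way to limit how severely an update can destroy old knowledge before it must be reconstructed.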


Similar Articles

1. Balanced Destruction-Reconstruction Dynamics for Memory-Replay Class Incremental Learning.
   IEEE Trans Image Process. 2024;33:4966-4981. doi: 10.1109/TIP.2024.3451932. Epub 2024 Sep 11.
2. Class-Incremental Learning Method With Fast Update and High Retainability Based on Broad Learning System.
   IEEE Trans Neural Netw Learn Syst. 2024 Aug;35(8):11332-11345. doi: 10.1109/TNNLS.2023.3259016. Epub 2024 Aug 5.
3. Imitating the oracle: Towards calibrated model for class incremental learning.
   Neural Netw. 2023 Jul;164:38-48. doi: 10.1016/j.neunet.2023.04.010. Epub 2023 Apr 23.
4. Incremental Zero-Shot Learning.
   IEEE Trans Cybern. 2022 Dec;52(12):13788-13799. doi: 10.1109/TCYB.2021.3110369. Epub 2022 Nov 18.
5. Prototype-Guided Memory Replay for Continual Learning.
   IEEE Trans Neural Netw Learn Syst. 2024 Aug;35(8):10973-10983. doi: 10.1109/TNNLS.2023.3246049. Epub 2024 Aug 5.
6. Memory-Efficient Class-Incremental Learning for Image Classification.
   IEEE Trans Neural Netw Learn Syst. 2022 Oct;33(10):5966-5977. doi: 10.1109/TNNLS.2021.3072041. Epub 2022 Oct 5.
7. CeCR: Cross-entropy contrastive replay for online class-incremental continual learning.
   Neural Netw. 2024 May;173:106163. doi: 10.1016/j.neunet.2024.106163. Epub 2024 Feb 3.
8. Multi-granularity knowledge distillation and prototype consistency regularization for class-incremental learning.
   Neural Netw. 2023 Jul;164:617-630. doi: 10.1016/j.neunet.2023.05.006. Epub 2023 May 11.
9. Sleep-like unsupervised replay reduces catastrophic forgetting in artificial neural networks.
   Nat Commun. 2022 Dec 15;13(1):7742. doi: 10.1038/s41467-022-34938-7.
10. Few Shot Class Incremental Learning via Efficient Prototype Replay and Calibration.
   Entropy (Basel). 2023 May 10;25(5):776. doi: 10.3390/e25050776.