Suppr 超能文献



Enhancing consistency and mitigating bias: A data replay approach for incremental learning.

Author Information

Wang Chenyang, Jiang Junjun, Hu Xingyu, Liu Xianming, Ji Xiangyang

Author Affiliations

School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China.

Department of Automation, Tsinghua University, Beijing 100084, China.

Publication Information

Neural Netw. 2025 Apr;184:107053. doi: 10.1016/j.neunet.2024.107053. Epub 2024 Dec 20.

DOI: 10.1016/j.neunet.2024.107053
PMID: 39732067
Abstract

Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks, as old data from previous tasks is unavailable when learning a new task. To address this, some methods propose replaying data from previous tasks during new task learning, typically using extra memory to store the replay data. However, this is often impractical due to memory constraints and data privacy issues. Instead, data-free replay methods invert samples from the classification model. While effective, these methods face inconsistencies between inverted and real training data, which have been overlooked in recent works. To this end, we propose to measure the data consistency quantitatively through some simplifications and assumptions. Using this measurement, we gain insight into developing a novel loss function that reduces the inconsistency. Specifically, the loss minimizes the KL divergence between the distributions of inverted and real data under a tied multivariate Gaussian assumption, which is simple to implement in continual learning. Additionally, we observe that old class weight norms decrease continually as learning progresses. We analyze the reasons and propose a regularization term to balance the class weights, making old class samples more distinguishable. To conclude, we introduce Consistency-enhanced data replay with a Debiased classifier for class incremental learning (CwD). Extensive experiments on CIFAR-100, Tiny-ImageNet, and ImageNet100 show consistently improved performance of CwD compared to previous approaches.
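Under the tied multivariate Gaussian assumption mentioned in the abstract, the KL divergence between the inverted-data and real-data distributions has a simple closed form: the trace and log-determinant terms cancel, leaving half the squared Mahalanobis distance between the two means. The sketch below illustrates that closed form only; the function name and inputs are illustrative and not the paper's actual loss implementation.

```python
import numpy as np

def kl_tied_gaussians(mu_p, mu_q, cov):
    """KL(N(mu_p, cov) || N(mu_q, cov)) for a shared (tied) covariance.

    With identical covariances the general Gaussian KL formula reduces to
    0.5 * (mu_q - mu_p)^T cov^{-1} (mu_q - mu_p), i.e. half the squared
    Mahalanobis distance between the means.
    """
    diff = mu_q - mu_p
    return 0.5 * diff @ np.linalg.solve(cov, diff)

# Example: unit covariance, means 2 apart along one axis -> 0.5 * 2^2 = 2.0
kl = kl_tied_gaussians(np.array([0.0, 0.0]), np.array([2.0, 0.0]), np.eye(2))
```

Because this reduces to a distance between means (weighted by the shared covariance), minimizing it as a loss simply pulls the statistics of inverted samples toward those of the real data, which is why the assumption makes the objective easy to implement in a continual-learning loop.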


Similar Articles

1. Enhancing consistency and mitigating bias: A data replay approach for incremental learning.
Neural Netw. 2025 Apr;184:107053. doi: 10.1016/j.neunet.2024.107053. Epub 2024 Dec 20.
2. Memory Recall: A Simple Neural Network Training Framework Against Catastrophic Forgetting.
IEEE Trans Neural Netw Learn Syst. 2022 May;33(5):2010-2022. doi: 10.1109/TNNLS.2021.3099700. Epub 2022 May 2.
3. Multi-granularity knowledge distillation and prototype consistency regularization for class-incremental learning.
Neural Netw. 2023 Jul;164:617-630. doi: 10.1016/j.neunet.2023.05.006. Epub 2023 May 11.
4. Generative negative replay for continual learning.
Neural Netw. 2023 May;162:369-383. doi: 10.1016/j.neunet.2023.03.006. Epub 2023 Mar 9.
5. Triple-Memory Networks: A Brain-Inspired Method for Continual Learning.
IEEE Trans Neural Netw Learn Syst. 2022 May;33(5):1925-1934. doi: 10.1109/TNNLS.2021.3111019. Epub 2022 May 2.
6. Sleep-like unsupervised replay reduces catastrophic forgetting in artificial neural networks.
Nat Commun. 2022 Dec 15;13(1):7742. doi: 10.1038/s41467-022-34938-7.
7. Brain-inspired replay for continual learning with artificial neural networks.
Nat Commun. 2020 Aug 13;11(1):4069. doi: 10.1038/s41467-020-17866-2.
8. Continual Learning With Knowledge Distillation: A Survey.
IEEE Trans Neural Netw Learn Syst. 2024 Oct 18;PP. doi: 10.1109/TNNLS.2024.3476068.
9. Rethinking exemplars for continual semantic segmentation in endoscopy scenes: Entropy-based mini-batch pseudo-replay.
Comput Biol Med. 2023 Oct;165:107412. doi: 10.1016/j.compbiomed.2023.107412. Epub 2023 Aug 30.
10. Deep Generative Replay-based Class-incremental Continual Learning in sEMG-based Pattern Recognition.
Annu Int Conf IEEE Eng Med Biol Soc. 2024 Jul;2024:1-4. doi: 10.1109/EMBC53108.2024.10781686.

Cited By

1. A Diagnosis-Based Siamese Network for Fault Detection Through Transfer Learning.
J Chem Inf Model. 2025 Jul 14;65(13):6703-6720. doi: 10.1021/acs.jcim.5c00809. Epub 2025 Jun 30.