

Continual Learning by Contrastive Learning of Regularized Classes in Multivariate Gaussian Distributions.

Authors

Moon Hyung-Jun, Cho Sung-Bae

Affiliations

Department of Artificial Intelligence, Yonsei University, 50 Yonsei-ro, Sudaemoon-gu, Seoul 03722, South Korea.

Department of Computer Science, Yonsei University, 50 Yonsei-ro, Sudaemoon-gu, Seoul 03722, South Korea.

Publication Information

Int J Neural Syst. 2025 Jun;35(6):2550025. doi: 10.1142/S012906572550025X. Epub 2025 Apr 4.

DOI: 10.1142/S012906572550025X
PMID: 40186335
Abstract

Deep neural networks struggle with incremental updates due to catastrophic forgetting, where newly acquired knowledge interferes with previously learned knowledge. Continual learning (CL) methods aim to overcome this limitation by updating the model effectively without losing previous knowledge, but they find it difficult to continuously maintain knowledge about previous tasks because the stored information overlaps. In this paper, we propose a CL method that preserves previous knowledge as multivariate Gaussian distributions by independently storing the model's outputs per class and continually reproducing them for future tasks. We enhance the discriminability between classes and ensure plasticity for future tasks by exploiting contrastive learning and representation regularization. The class-wise spatial means and covariances, distinguished in the latent space, are stored in memory, where the previous knowledge is effectively preserved and reproduced for incremental tasks. Extensive experiments on benchmark datasets such as CIFAR-10, CIFAR-100, and ImageNet-100 demonstrate that the proposed method achieves accuracies of 93.21%, 77.57%, and 78.15%, respectively, outperforming state-of-the-art CL methods by 2.34%p, 2.1%p, and 1.91%p. Additionally, it achieves the lowest mean forgetting rates across all datasets.
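
The method summarized above stores each seen class as a multivariate Gaussian over latent features (a class-wise mean vector and covariance matrix) and reproduces pseudo-features from those stored distributions when learning later tasks. The sketch below is a minimal PyTorch illustration of that feature-replay idea only, written under my own assumptions: the class GaussianClassMemory, its method names, and the toy data are hypothetical, and the paper's contrastive-learning and representation-regularization components are not reproduced here.

```python
import torch


class GaussianClassMemory:
    """Keeps one multivariate Gaussian (mean, covariance) per seen class in the
    encoder's latent space and samples pseudo-features from it so that old
    classes can be rehearsed while training on a new task."""

    def __init__(self, eps: float = 1e-4):
        self.stats = {}   # class id -> (mean, covariance)
        self.eps = eps    # diagonal jitter keeps covariances positive definite

    @torch.no_grad()
    def store(self, class_id: int, features: torch.Tensor) -> None:
        # features: (n_samples, latent_dim) latent vectors of a single class
        mean = features.mean(dim=0)
        centered = features - mean
        cov = centered.T @ centered / max(features.shape[0] - 1, 1)
        cov = cov + self.eps * torch.eye(features.shape[1])
        self.stats[class_id] = (mean, cov)

    def replay(self, n_per_class: int):
        # Draw pseudo-features and labels for every previously stored class.
        feats, labels = [], []
        for cid, (mean, cov) in self.stats.items():
            dist = torch.distributions.MultivariateNormal(mean, covariance_matrix=cov)
            feats.append(dist.sample((n_per_class,)))
            labels.append(torch.full((n_per_class,), cid, dtype=torch.long))
        return torch.cat(feats), torch.cat(labels)


if __name__ == "__main__":
    memory = GaussianClassMemory()
    # Task 1: pretend these are encoder outputs for classes 0 and 1.
    memory.store(0, torch.randn(200, 64) + 2.0)
    memory.store(1, torch.randn(200, 64) - 2.0)
    # Task 2: mix replayed pseudo-features with new-task data for the classifier head.
    replay_feats, replay_labels = memory.replay(n_per_class=32)
    print(replay_feats.shape, replay_labels.shape)  # (64, 64) and (64,)
```

In a full pipeline, such sampled features would be fed to the classifier alongside the current task's data, which is how stored class-wise Gaussians can stand in for raw exemplars of earlier classes.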


Similar Articles

1. Continual Learning by Contrastive Learning of Regularized Classes in Multivariate Gaussian Distributions. Int J Neural Syst. 2025 Jun;35(6):2550025. doi: 10.1142/S012906572550025X. Epub 2025 Apr 4.
2. CCSI: Continual Class-Specific Impression for data-free class incremental learning. Med Image Anal. 2024 Oct;97:103239. doi: 10.1016/j.media.2024.103239. Epub 2024 Jun 15.
3. Memory Recall: A Simple Neural Network Training Framework Against Catastrophic Forgetting. IEEE Trans Neural Netw Learn Syst. 2022 May;33(5):2010-2022. doi: 10.1109/TNNLS.2021.3099700. Epub 2022 May 2.
4. Enhancing consistency and mitigating bias: A data replay approach for incremental learning. Neural Netw. 2025 Apr;184:107053. doi: 10.1016/j.neunet.2024.107053. Epub 2024 Dec 20.
5. Continual Nuclei Segmentation via Prototype-Wise Relation Distillation and Contrastive Learning. IEEE Trans Med Imaging. 2023 Dec;42(12):3794-3804. doi: 10.1109/TMI.2023.3307892. Epub 2023 Nov 30.
6. Triple-Memory Networks: A Brain-Inspired Method for Continual Learning. IEEE Trans Neural Netw Learn Syst. 2022 May;33(5):1925-1934. doi: 10.1109/TNNLS.2021.3111019. Epub 2022 May 2.
7. Continual Learning With Knowledge Distillation: A Survey. IEEE Trans Neural Netw Learn Syst. 2024 Oct 18;PP. doi: 10.1109/TNNLS.2024.3476068.
8. A Continual Learning Survey: Defying Forgetting in Classification Tasks. IEEE Trans Pattern Anal Mach Intell. 2022 Jul;44(7):3366-3385. doi: 10.1109/TPAMI.2021.3057446. Epub 2022 Jun 3.
9. Imbalance Mitigation for Continual Learning via Knowledge Decoupling and Dual Enhanced Contrastive Learning. IEEE Trans Neural Netw Learn Syst. 2025 Feb;36(2):3450-3463. doi: 10.1109/TNNLS.2023.3347477. Epub 2025 Feb 6.
10. Variational Data-Free Knowledge Distillation for Continual Learning. IEEE Trans Pattern Anal Mach Intell. 2023 Oct;45(10):12618-12634. doi: 10.1109/TPAMI.2023.3271626. Epub 2023 Sep 5.