

A class-incremental learning approach for learning feature-compatible embeddings.

Affiliations

Guizhou University, School of Mechanical Engineering, Guiyang, 550025, Guizhou, China.

Guizhou University, School of Mechanical Engineering, Guiyang, 550025, Guizhou, China; Guizhou University, State Key Laboratory of Public Big Data Ministry of Education, Guiyang, 550025, Guizhou, China.

Publication information

Neural Netw. 2024 Dec;180:106685. doi: 10.1016/j.neunet.2024.106685. Epub 2024 Aug 31.

DOI: 10.1016/j.neunet.2024.106685
PMID: 39243512
Abstract

Humans have the ability to constantly learn new knowledge. For artificial intelligence, however, attempting to continuously learn new knowledge usually results in catastrophic forgetting; existing regularization-based and dynamic structure-based approaches have shown great potential for alleviating it. Nevertheless, these approaches have certain limitations: they usually do not fully consider the problem of incompatible feature embeddings. Instead, they tend to focus only on the features of new or previous classes and fail to consider the model as a whole. We therefore propose a two-stage learning paradigm to solve the feature embedding incompatibility problem. Specifically, in the first stage we retain the previous model and freeze all its parameters while dynamically expanding a new module to alleviate feature embedding incompatibility. In the second stage, a fusion knowledge distillation approach is used to compress the redundant feature dimensions. Moreover, we propose weight pruning and consolidation approaches to improve the efficiency of the model. Our experimental results on the CIFAR-100, ImageNet-100 and ImageNet-1000 benchmark datasets show that the proposed approaches achieve the best performance among all compared approaches. For example, on the ImageNet-100 dataset, the maximal accuracy improvement is 5.08%. Code is available at https://github.com/ybyangjing/CIL-FCE.
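The second stage above relies on knowledge distillation. As a rough, dependency-free sketch of the general idea (not the authors' fusion variant; the function names and temperature value here are illustrative, and the exact formulation is in the paper's code repository), a temperature-scaled distillation loss compares the softened outputs of a frozen teacher and a compressed student:

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; a higher T softens the distribution,
    # exposing the teacher's "dark knowledge" about non-target classes.
    z = [x / T for x in logits]
    m = max(z)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in z]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) on temperature-softened outputs, scaled by
    # T^2 as in the classic soft-target formulation (Hinton et al.).
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T * T * sum(pi * (math.log(pi) - math.log(qi))
                       for pi, qi in zip(p, q))

# Matching outputs give zero loss; divergence from the teacher raises it.
same = distillation_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0])
diff = distillation_loss([0.1, 0.2, 0.3], [2.0, 0.5, -1.0])
```

Minimizing such a loss lets a smaller student mimic the frozen teacher's output distribution, which is how redundant feature dimensions introduced by the dynamically expanded module can be compressed away.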


Similar articles

1
A class-incremental learning approach for learning feature-compatible embeddings.
Neural Netw. 2024 Dec;180:106685. doi: 10.1016/j.neunet.2024.106685. Epub 2024 Aug 31.
2
Class-incremental learning with Balanced Embedding Discrimination Maximization.
Neural Netw. 2024 Nov;179:106487. doi: 10.1016/j.neunet.2024.106487. Epub 2024 Jun 22.
3
Multi-granularity knowledge distillation and prototype consistency regularization for class-incremental learning.
Neural Netw. 2023 Jul;164:617-630. doi: 10.1016/j.neunet.2023.05.006. Epub 2023 May 11.
4
Memory Recall: A Simple Neural Network Training Framework Against Catastrophic Forgetting.
IEEE Trans Neural Netw Learn Syst. 2022 May;33(5):2010-2022. doi: 10.1109/TNNLS.2021.3099700. Epub 2022 May 2.
5
A comprehensive study of class incremental learning algorithms for visual tasks.
Neural Netw. 2021 Mar;135:38-54. doi: 10.1016/j.neunet.2020.12.003. Epub 2020 Dec 8.
6
Overcoming Long-Term Catastrophic Forgetting Through Adversarial Neural Pruning and Synaptic Consolidation.
IEEE Trans Neural Netw Learn Syst. 2022 Sep;33(9):4243-4256. doi: 10.1109/TNNLS.2021.3056201. Epub 2022 Aug 31.
7
DyCR: A Dynamic Clustering and Recovering Network for Few-Shot Class-Incremental Learning.
IEEE Trans Neural Netw Learn Syst. 2025 Apr;36(4):7116-7129. doi: 10.1109/TNNLS.2024.3394844. Epub 2025 Apr 4.
8
GCReID: Generalized continual person re-identification via meta learning and knowledge accumulation.
Neural Netw. 2024 Nov;179:106561. doi: 10.1016/j.neunet.2024.106561. Epub 2024 Jul 22.
9
Incremental Embedding Learning With Disentangled Representation Translation.
IEEE Trans Neural Netw Learn Syst. 2024 Mar;35(3):3821-3833. doi: 10.1109/TNNLS.2022.3199816. Epub 2024 Feb 29.
10
Incremental Learning for Online Data Using QR Factorization on Convolutional Neural Networks.
Sensors (Basel). 2023 Sep 27;23(19):8117. doi: 10.3390/s23198117.

Cited by

1
Fostering life hope in urban green spaces through brief online mindfulness: findings from four studies with park visitors.
Front Psychol. 2025 Jul 23;16:1642533. doi: 10.3389/fpsyg.2025.1642533. eCollection 2025.