Suppr 超能文献


Exploring multi-granularity balance strategy for class incremental learning via three-way granular computing.

Authors

Xian Yan, Yu Hong, Wang Ye, Wang Guoyin

Affiliations

Chongqing Key Laboratory of Computational Intelligence, Chongqing University of Posts and Telecommunications, No.2 Chongwen Road, Chongqing, 400065, China.

National Center for Applied Mathematics in Chongqing, Chongqing Normal University, No. 37 Middle University Road, Chongqing, 401331, China.

Publication

Brain Inform. 2025 Mar 17;12(1):7. doi: 10.1186/s40708-025-00255-0.

DOI:10.1186/s40708-025-00255-0
PMID:40095147
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11914578/
Abstract

Class incremental learning (CIL) is a specific scenario in incremental learning. It aims to continuously learn new classes from the data stream, which suffers from the challenge of catastrophic forgetting. Inspired by the human hippocampus, the CIL method for replaying episodic memory offers a promising solution. However, the limited buffer budget restricts the number of old class samples that can be stored, resulting in an imbalance between new and old class samples during each incremental learning stage. This imbalance adversely affects the mitigation of catastrophic forgetting. Therefore, we propose a novel CIL method based on multi-granularity balance strategy (MGBCIL), which is inspired by the three-way granular computing in human problem-solving. In order to mitigate the adverse effects of imbalances on catastrophic forgetting at fine-, medium-, and coarse-grained levels during training, MGBCIL introduces specific strategies across the batch, task, and decision stages. Specifically, a weighted cross-entropy loss function with a smoothing factor is proposed for batch processing. In the process of task updating and classification decision, contrastive learning with different anchor point settings is employed to promote local and global separation between new and old classes. Additionally, the knowledge distillation technology is used to preserve knowledge of the old classes. Experimental evaluations on CIFAR-10 and CIFAR-100 datasets show that MGBCIL outperforms other methods in most incremental settings. Specifically, when storing 3 exemplars on CIFAR-10 with Base2 Inc2 setting, the average accuracy is improved by up to 9.59% and the forgetting rate is reduced by up to 25.45%.
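The abstract names three generic ingredients of MGBCIL: a weighted cross-entropy loss with a smoothing factor (batch level), contrastive learning that separates new and old classes (task and decision levels), and knowledge distillation to preserve old-class knowledge. The paper's exact formulations, smoothing factor, anchor settings, and weighting scheme are not given in this abstract; the NumPy sketch below shows plain textbook versions of each ingredient, with all function names and hyperparameters as illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def weighted_smoothed_ce(logits, labels, class_weights, smoothing=0.1):
    """Cross-entropy with label smoothing, re-weighted per class.
    Giving old classes larger weights counters the new/old sample imbalance
    caused by a small replay buffer."""
    n, c = logits.shape
    targets = np.full((n, c), smoothing / (c - 1))   # smoothed off-target mass
    targets[np.arange(n), labels] = 1.0 - smoothing  # on-target probability
    per_sample = -(targets * np.log(softmax(logits))).sum(axis=1)
    w = class_weights[labels]
    return float((w * per_sample).sum() / w.sum())

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between temperature-softened teacher (old model) and
    student (current model) outputs, preserving old-class knowledge."""
    p_t = softmax(teacher_logits / T)
    log_ratio = np.log(p_t) - np.log(softmax(student_logits / T))
    return float((T * T) * (p_t * log_ratio).sum(axis=1).mean())

def supcon_loss(embeddings, labels, tau=0.5):
    """Supervised contrastive loss: each sample acts as an anchor, pulled
    toward same-class samples and pushed from different-class ones, which
    promotes separation between new- and old-class representations."""
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / tau                      # cosine similarity / temperature
    n = len(labels)
    eye = np.eye(n, dtype=bool)
    exp_sim = np.exp(sim)
    exp_sim[eye] = 0.0                       # exclude self-similarity
    denom = exp_sim.sum(axis=1)
    per_anchor = []
    for i in range(n):
        pos = (labels == labels[i]) & ~eye[i]
        if pos.any():
            per_anchor.append(-(sim[i, pos] - np.log(denom[i])).mean())
    return float(np.mean(per_anchor))
```

A full training objective in this style would sum the three terms with trade-off coefficients, e.g. `L = ce + alpha * supcon + beta * kd`, where the coefficients and anchor choices are what a method like MGBCIL tunes per stage.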


Figures (Fig 1-7):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8d14/11914578/c947676a1532/40708_2025_255_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8d14/11914578/9bca50127a61/40708_2025_255_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8d14/11914578/2aff96908028/40708_2025_255_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8d14/11914578/308383700002/40708_2025_255_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8d14/11914578/d33d288246c8/40708_2025_255_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8d14/11914578/5e44d16a1abc/40708_2025_255_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8d14/11914578/0c53eee147b1/40708_2025_255_Fig7_HTML.jpg

Similar articles

1
Exploring multi-granularity balance strategy for class incremental learning via three-way granular computing.
Brain Inform. 2025 Mar 17;12(1):7. doi: 10.1186/s40708-025-00255-0.
2
Multi-granularity knowledge distillation and prototype consistency regularization for class-incremental learning.
Neural Netw. 2023 Jul;164:617-630. doi: 10.1016/j.neunet.2023.05.006. Epub 2023 May 11.
3
Imbalance Mitigation for Continual Learning via Knowledge Decoupling and Dual Enhanced Contrastive Learning.
IEEE Trans Neural Netw Learn Syst. 2025 Feb;36(2):3450-3463. doi: 10.1109/TNNLS.2023.3347477. Epub 2025 Feb 6.
4
CeCR: Cross-entropy contrastive replay for online class-incremental continual learning.
Neural Netw. 2024 May;173:106163. doi: 10.1016/j.neunet.2024.106163. Epub 2024 Feb 3.
5
LNet: Localized and Layered Reparameterization for incremental learning.
Neural Netw. 2025 Aug;188:107420. doi: 10.1016/j.neunet.2025.107420. Epub 2025 Mar 24.
6
CL3: Generalization of Contrastive Loss for Lifelong Learning.
J Imaging. 2023 Nov 23;9(12):259. doi: 10.3390/jimaging9120259.
7
Uncertainty-Aware Contrastive Distillation for Incremental Semantic Segmentation.
IEEE Trans Pattern Anal Mach Intell. 2023 Feb;45(2):2567-2581. doi: 10.1109/TPAMI.2022.3163806. Epub 2023 Jan 6.
8
Incremental learning for an evolving stream of medical ultrasound images via counterfactual thinking.
Comput Med Imaging Graph. 2023 Oct;109:102290. doi: 10.1016/j.compmedimag.2023.102290. Epub 2023 Aug 20.
9
Continual Learning by Contrastive Learning of Regularized Classes in Multivariate Gaussian Distributions.
Int J Neural Syst. 2025 Jun;35(6):2550025. doi: 10.1142/S012906572550025X. Epub 2025 Apr 4.
10
WP-FSCIL: A Well-Prepared Few-shot Class-incremental Learning Framework for Pill Recognition.
IEEE J Biomed Health Inform. 2025 Mar 6;PP. doi: 10.1109/JBHI.2025.3548691.

References cited in this article

1
Class-Incremental Learning: A Survey.
IEEE Trans Pattern Anal Mach Intell. 2024 Dec;46(12):9851-9873. doi: 10.1109/TPAMI.2024.3429383. Epub 2024 Nov 6.
2
A Comprehensive Survey of Continual Learning: Theory, Method and Application.
IEEE Trans Pattern Anal Mach Intell. 2024 Aug;46(8):5362-5383. doi: 10.1109/TPAMI.2024.3367329. Epub 2024 Jul 2.
3
Brain-inspired replay for continual learning with artificial neural networks.
Nat Commun. 2020 Aug 13;11(1):4069. doi: 10.1038/s41467-020-17866-2.
4
Continual Learning Through Synaptic Intelligence.
Proc Mach Learn Res. 2017;70:3987-3995.
5
The hippocampal sharp wave-ripple in memory retrieval for immediate use and consolidation.
Nat Rev Neurosci. 2018 Dec;19(12):744-757. doi: 10.1038/s41583-018-0077-1.
6
Learning without Forgetting.
IEEE Trans Pattern Anal Mach Intell. 2018 Dec;40(12):2935-2947. doi: 10.1109/TPAMI.2017.2773081. Epub 2017 Nov 14.
7
Granular computing with multiple granular layers for brain big data processing.
Brain Inform. 2014 Dec;1(1-4):1-10. doi: 10.1007/s40708-014-0001-z. Epub 2014 Sep 6.
8
Reactivation of hippocampal ensemble memories during sleep.
Science. 1994 Jul 29;265(5172):676-9. doi: 10.1126/science.8036517.