

CeCR: Cross-entropy contrastive replay for online class-incremental continual learning.

Affiliations

School of Computer Science and Technology, Harbin University of Science and Technology, Harbin 150080, China.

Publication Information

Neural Netw. 2024 May;173:106163. doi: 10.1016/j.neunet.2024.106163. Epub 2024 Feb 3.

DOI: 10.1016/j.neunet.2024.106163
PMID: 38430638
Abstract

Replay-based methods have shown strong potential for learning continually from an online data stream. Their main challenge is selecting the representative samples that are stored in the buffer and replayed. In this paper, we propose the Cross-entropy Contrastive Replay (CeCR) method for the online class-incremental setting. First, we present a class-focused memory retrieval method that performs class-level sampling without replacement. Second, we put forward a class-mean approximation memory update method that selectively replaces mistakenly classified training samples with samples from the current input batch. In addition, a cross-entropy contrastive loss is proposed so that training yields more solid knowledge and thus more effective learning. Experiments show that CeCR achieves comparable or better performance than state-of-the-art methods on two benchmark datasets.
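The three components named in the abstract can be illustrated with a toy NumPy sketch. Everything below is a hypothetical reading of the abstract, not the authors' implementation: `ReplayBuffer.retrieve` stands in for class-focused retrieval (sampling without replacement, spread across stored classes), `ReplayBuffer.update` stands in for the class-mean approximation update (using distance to the class mean as a proxy for "mistakenly classified"), and `cross_entropy_contrastive_loss` is one plausible cross-entropy formulation of a supervised contrastive objective.

```python
import numpy as np


class ReplayBuffer:
    """Toy replay buffer sketching two ideas from the CeCR abstract."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.x, self.y = [], []
        self.rng = np.random.default_rng(seed)

    def retrieve(self, n):
        """Class-focused retrieval: draw without replacement, round-robin
        across the classes currently stored so each class is represented."""
        by_class = {}
        for i, label in enumerate(self.y):
            by_class.setdefault(label, []).append(i)
        for idxs in by_class.values():
            self.rng.shuffle(idxs)
        picked = []
        while len(picked) < min(n, len(self.y)):
            for idxs in by_class.values():
                if idxs and len(picked) < n:
                    picked.append(idxs.pop())
        return [self.x[i] for i in picked], [self.y[i] for i in picked]

    def update(self, batch_x, batch_y):
        """Fill the buffer; once full, replace the stored sample farthest
        from its class mean with an incoming same-class sample that lies
        closer to that mean (a crude stand-in for the paper's rule of
        swapping out mistakenly classified samples)."""
        for x, y in zip(batch_x, batch_y):
            if len(self.x) < self.capacity:
                self.x.append(x)
                self.y.append(y)
                continue
            same = [i for i, lab in enumerate(self.y) if lab == y]
            if not same:
                continue
            mean = np.mean([self.x[i] for i in same], axis=0)
            worst = max(same, key=lambda i: np.linalg.norm(self.x[i] - mean))
            if np.linalg.norm(x - mean) < np.linalg.norm(self.x[worst] - mean):
                self.x[worst] = x


def cross_entropy_contrastive_loss(feats, labels, tau=0.1):
    """Cross-entropy over pairwise cosine similarities, with same-label
    pairs as positives: one plausible 'cross-entropy contrastive' form."""
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = feats @ feats.T / tau
    np.fill_diagonal(sim, -np.inf)  # a sample is not its own positive
    logp = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & ~np.eye(len(labels), dtype=bool)
    return -logp[pos].mean()
```

In an online loop one would interleave `buffer.retrieve` (to build a replay batch), a gradient step on the combined loss, and `buffer.update` with the incoming stream batch.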


Similar Articles

1. CeCR: Cross-entropy contrastive replay for online class-incremental continual learning.
Neural Netw. 2024 May;173:106163. doi: 10.1016/j.neunet.2024.106163. Epub 2024 Feb 3.
2. Online Continual Learning in Acoustic Scene Classification: An Empirical Study.
Sensors (Basel). 2023 Aug 3;23(15):6893. doi: 10.3390/s23156893.
3. Rethinking exemplars for continual semantic segmentation in endoscopy scenes: Entropy-based mini-batch pseudo-replay.
Comput Biol Med. 2023 Oct;165:107412. doi: 10.1016/j.compbiomed.2023.107412. Epub 2023 Aug 30.
4. HPCR: Holistic Proxy-Based Contrastive Replay for Online Continual Learning.
IEEE Trans Neural Netw Learn Syst. 2025 Jan 13;PP. doi: 10.1109/TNNLS.2025.3526442.
5. Exploring multi-granularity balance strategy for class incremental learning via three-way granular computing.
Brain Inform. 2025 Mar 17;12(1):7. doi: 10.1186/s40708-025-00255-0.
6. Imbalance Mitigation for Continual Learning via Knowledge Decoupling and Dual Enhanced Contrastive Learning.
IEEE Trans Neural Netw Learn Syst. 2025 Feb;36(2):3450-3463. doi: 10.1109/TNNLS.2023.3347477. Epub 2025 Feb 6.
7. Brain-inspired replay for continual learning with artificial neural networks.
Nat Commun. 2020 Aug 13;11(1):4069. doi: 10.1038/s41467-020-17866-2.
8. Online Active Continual Learning for Robotic Lifelong Object Recognition.
IEEE Trans Neural Netw Learn Syst. 2024 Dec;35(12):17790-17804. doi: 10.1109/TNNLS.2023.3308900. Epub 2024 Dec 2.
9. scEVOLVE: cell-type incremental annotation without forgetting for single-cell RNA-seq data.
Brief Bioinform. 2024 Jan 22;25(2). doi: 10.1093/bib/bbae039.
10. Prototype-Guided Memory Replay for Continual Learning.
IEEE Trans Neural Netw Learn Syst. 2024 Aug;35(8):10973-10983. doi: 10.1109/TNNLS.2023.3246049. Epub 2024 Aug 5.

Cited By

1. Generative Diffusion-Based Task Incremental Learning Method for Decoding Motor Imagery EEG.
Brain Sci. 2025 Jan 21;15(2):98. doi: 10.3390/brainsci15020098.