

Class-Incremental Learning Method With Fast Update and High Retainability Based on Broad Learning System.

Author Information

Du Jie, Liu Peng, Vong Chi-Man, Chen Chuangquan, Wang Tianfu, Chen C L Philip

Publication Information

IEEE Trans Neural Netw Learn Syst. 2024 Aug;35(8):11332-11345. doi: 10.1109/TNNLS.2023.3259016. Epub 2024 Aug 5.

DOI: 10.1109/TNNLS.2023.3259016
PMID: 37030863
Abstract

Machine learning aims to generate a predictive model from a training dataset of a fixed number of known classes. However, many real-world applications (such as health monitoring and elderly care) are data streams in which new data arrive continually in a short time. Such new data may even belong to previously unknown classes. Hence, class-incremental learning (CIL) is necessary, which incrementally and rapidly updates an existing model with the data of new classes while retaining the existing knowledge of old classes. However, most current CIL methods are designed based on deep models that require a computationally expensive training and update process. In addition, deep learning based CIL (DCIL) methods typically employ stochastic gradient descent (SGD) as an optimizer that forgets the old knowledge to a certain extent. In this article, a broad learning system-based CIL (BLS-CIL) method with fast update and high retainability of old class knowledge is proposed. Traditional BLS is a fast and effective shallow neural network, but it does not work well on CIL tasks. However, our proposed BLS-CIL can overcome these issues and provide the following: 1) high accuracy due to our novel class-correlation loss function that considers the correlations between old and new classes; 2) significantly short training/update time due to the newly derived closed-form solution for our class-correlation loss without iterative optimization; and 3) high retainability of old class knowledge due to our newly derived recursive update rule for CIL (RULL) that does not replay the exemplars of all old classes, as contrasted to the exemplars-replaying methods with the SGD optimizer. The proposed BLS-CIL has been evaluated over 12 real-world datasets, including seven tabular/numerical datasets and six image datasets, and the compared methods include one shallow network and seven classical or state-of-the-art DCIL methods. Experimental results show that our BLS-CIL can significantly improve the classification performance over a shallow network by a large margin (8.80%-48.42%). It also achieves comparable or even higher accuracy than DCIL methods, but greatly reduces the training time from hours to minutes and the update time from minutes to seconds.
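As background for the closed-form solution and recursive update mentioned above: in a plain broad learning system, the output weights over the expanded feature matrix A are obtained by ridge regression, and a new data block can be absorbed with the standard recursive least-squares identity. The formulas below state only this generic background, not the paper's class-correlation loss or RULL derivation, and the notation (A, Y, λ, P) is assumed here rather than taken from the paper:

$$W = (A^{\top}A + \lambda I)^{-1} A^{\top} Y, \qquad P \equiv (A^{\top}A + \lambda I)^{-1},$$

and when a new data block $(A_1, Y_1)$ arrives,

$$P' = P - P A_1^{\top}\,(I + A_1 P A_1^{\top})^{-1} A_1 P, \qquad W' = W + P' A_1^{\top}\,(Y_1 - A_1 W),$$

so the model is refreshed from (P, W) and the new block alone, without revisiting old samples.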

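The toy Python sketch below implements the generic recursive update above for a broad-learning-style classifier, growing the output layer by one column whenever an unseen class appears. It is a minimal illustration assuming random tanh feature/enhancement nodes and a recursive least-squares solve; it is not the paper's BLS-CIL or RULL algorithm, it omits the class-correlation loss, and every name in it (ToyBLS, partial_fit, the node counts) is a hypothetical choice.

import numpy as np

class ToyBLS:
    """Toy broad-learning-style classifier with a recursive least-squares
    update. Hypothetical sketch only; not the paper's BLS-CIL/RULL method."""

    def __init__(self, n_features, n_mapped=64, n_enhance=64, reg=1e-2, seed=0):
        rng = np.random.default_rng(seed)
        # Random, fixed weights for feature-mapping and enhancement nodes.
        self.Wf = rng.standard_normal((n_features, n_mapped))
        self.We = rng.standard_normal((n_mapped, n_enhance))
        d = n_mapped + n_enhance
        # P approximates (A^T A + reg*I)^(-1); W holds the output weights.
        self.P = np.eye(d) / reg
        self.W = np.zeros((d, 0))   # one column per class, added as classes appear
        self.classes = []

    def _expand(self, X):
        Z = np.tanh(np.asarray(X, dtype=float) @ self.Wf)  # feature-mapping nodes
        H = np.tanh(Z @ self.We)                            # enhancement nodes
        return np.hstack([Z, H])                             # broad expansion A = [Z | H]

    def partial_fit(self, X, y):
        """Absorb one batch, which may contain previously unseen classes."""
        A = self._expand(X)
        y = list(y)
        # Grow the output layer with a zero column for each new class.
        for c in y:
            if c not in self.classes:
                self.classes.append(c)
                self.W = np.hstack([self.W, np.zeros((self.W.shape[0], 1))])
        Y = np.zeros((len(y), len(self.classes)))
        Y[np.arange(len(y)), [self.classes.index(c) for c in y]] = 1.0
        # Recursive least-squares (Woodbury) update: no old samples are replayed.
        G = np.linalg.inv(np.eye(len(y)) + A @ self.P @ A.T)
        self.P -= self.P @ A.T @ G @ A @ self.P
        self.W += self.P @ A.T @ (Y - A @ self.W)
        return self

    def predict(self, X):
        scores = self._expand(X) @ self.W
        return [self.classes[i] for i in scores.argmax(axis=1)]

Calling partial_fit first on a batch with classes 0 and 1, and later on a batch that introduces class 2, adds one output column and refines P without storing or replaying any earlier samples, which is the replay-free, closed-form style of update the abstract describes (the actual RULL and class-correlation loss in the paper differ).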

Similar Articles

1. Class-Incremental Learning Method With Fast Update and High Retainability Based on Broad Learning System.
   IEEE Trans Neural Netw Learn Syst. 2024 Aug;35(8):11332-11345. doi: 10.1109/TNNLS.2023.3259016. Epub 2024 Aug 5.
2. Balanced Destruction-Reconstruction Dynamics for Memory-Replay Class Incremental Learning.
   IEEE Trans Image Process. 2024;33:4966-4981. doi: 10.1109/TIP.2024.3451932. Epub 2024 Sep 11.
3. Multi-granularity knowledge distillation and prototype consistency regularization for class-incremental learning.
   Neural Netw. 2023 Jul;164:617-630. doi: 10.1016/j.neunet.2023.05.006. Epub 2023 May 11.
4. CCSI: Continual Class-Specific Impression for data-free class incremental learning.
   Med Image Anal. 2024 Oct;97:103239. doi: 10.1016/j.media.2024.103239. Epub 2024 Jun 15.
5. A novel adaptive cubic quasi-Newton optimizer for deep learning based medical image analysis tasks, validated on detection of COVID-19 and segmentation for COVID-19 lung infection, liver tumor, and optic disc/cup.
   Med Phys. 2023 Mar;50(3):1528-1538. doi: 10.1002/mp.15969. Epub 2022 Oct 6.
6. An Adaptive Deep Metric Learning Loss Function for Class-Imbalance Learning via Intraclass Diversity and Interclass Distillation.
   IEEE Trans Neural Netw Learn Syst. 2024 Nov;35(11):15372-15386. doi: 10.1109/TNNLS.2023.3286484. Epub 2024 Oct 29.
7. Deep Class-Incremental Learning From Decentralized Data.
   IEEE Trans Neural Netw Learn Syst. 2024 May;35(5):7190-7203. doi: 10.1109/TNNLS.2022.3214573. Epub 2024 May 2.
8. Class-Incremental Learning: A Survey.
   IEEE Trans Pattern Anal Mach Intell. 2024 Dec;46(12):9851-9873. doi: 10.1109/TPAMI.2024.3429383. Epub 2024 Nov 6.
9. CKDF: Cascaded Knowledge Distillation Framework for Robust Incremental Learning.
   IEEE Trans Image Process. 2022;31:3825-3837. doi: 10.1109/TIP.2022.3176130. Epub 2022 Jun 2.
10. An Incremental-Self-Training-Guided Semi-Supervised Broad Learning System.
    IEEE Trans Neural Netw Learn Syst. 2025 Apr;36(4):7196-7210. doi: 10.1109/TNNLS.2024.3392583. Epub 2025 Apr 4.