

Learning without Forgetting.

Author Information

Li Zhizhong, Hoiem Derek

Publication Information

IEEE Trans Pattern Anal Mach Intell. 2018 Dec;40(12):2935-2947. doi: 10.1109/TPAMI.2017.2773081. Epub 2017 Nov 14.

DOI: 10.1109/TPAMI.2017.2773081
PMID: 29990101
Abstract

When building a unified vision system or gradually adding new capabilities to a system, the usual assumption is that training data for all tasks is always available. However, as the number of tasks grows, storing and retraining on such data becomes infeasible. A new problem arises where we add new capabilities to a Convolutional Neural Network (CNN), but the training data for its existing capabilities are unavailable. We propose our Learning without Forgetting method, which uses only new task data to train the network while preserving the original capabilities. Our method performs favorably compared to commonly used feature extraction and fine-tuning adaptation techniques and performs similarly to multitask learning that uses original task data we assume unavailable. A more surprising observation is that Learning without Forgetting may be able to replace fine-tuning with similar old and new task datasets for improved new task performance.
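The abstract describes training on new-task data only while preserving old-task behavior. In the standard Learning-without-Forgetting formulation this is done by recording the original network's outputs on the new data before training, then adding a knowledge-distillation term that keeps the old-task head close to those recorded responses. The following is a minimal numpy sketch of such a combined loss; the function name, temperature, and weighting parameter are illustrative, not taken from the paper's code.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax, computed stably."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def lwf_loss(new_logits, new_labels, old_logits_current, old_logits_recorded,
             T=2.0, lam=1.0):
    """LwF-style loss sketch: new-task cross-entropy plus a distillation
    term that anchors the old-task head to responses recorded from the
    original network before new-task training began.

    new_logits:          current network's logits on the new-task head
    new_labels:          integer class labels for the new task
    old_logits_current:  current network's logits on the old-task head
    old_logits_recorded: the original network's logits on the same inputs
                         (the distillation targets)
    """
    # Standard cross-entropy on the new task.
    p_new = softmax(new_logits)
    n = new_logits.shape[0]
    ce = -np.mean(np.log(p_new[np.arange(n), new_labels] + 1e-12))

    # Distillation term: cross-entropy between the temperature-softened
    # recorded responses and the current old-task outputs. It is minimized
    # when the old head still reproduces its original behavior.
    p_target = softmax(old_logits_recorded, T)
    p_current = softmax(old_logits_current, T)
    kd = -np.mean(np.sum(p_target * np.log(p_current + 1e-12), axis=-1))

    return ce + lam * kd
```

Because the distillation targets are computed from the new-task images themselves, no old-task data needs to be stored; any drift of the old head away from its recorded outputs raises the loss.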


Similar Articles

1. Learning without Forgetting.
   IEEE Trans Pattern Anal Mach Intell. 2018 Dec;40(12):2935-2947. doi: 10.1109/TPAMI.2017.2773081. Epub 2017 Nov 14.
2. Tree-CNN: A hierarchical Deep Convolutional Neural Network for incremental learning.
   Neural Netw. 2020 Jan;121:148-160. doi: 10.1016/j.neunet.2019.09.010. Epub 2019 Sep 19.
3. Adversarial Feature Alignment: Avoid Catastrophic Forgetting in Incremental Task Lifelong Learning.
   Neural Comput. 2019 Nov;31(11):2266-2291. doi: 10.1162/neco_a_01232. Epub 2019 Sep 16.
4. Task Sensitive Feature Exploration and Learning for Multitask Graph Classification.
   IEEE Trans Cybern. 2017 Mar;47(3):744-758. doi: 10.1109/TCYB.2016.2526058. Epub 2016 Mar 10.
5. Lifelong Metric Learning.
   IEEE Trans Cybern. 2019 Aug;49(8):3168-3179. doi: 10.1109/TCYB.2018.2841046. Epub 2018 Jun 21.
6. Continual Learning for Activity Recognition.
   Annu Int Conf IEEE Eng Med Biol Soc. 2022 Jul;2022:2416-2420. doi: 10.1109/EMBC48229.2022.9871690.
7. Multitask-Guided Deep Clustering With Boundary Adaptation.
   IEEE Trans Neural Netw Learn Syst. 2024 May;35(5):6089-6102. doi: 10.1109/TNNLS.2023.3307126. Epub 2024 May 2.
8. Discriminative Unsupervised Feature Learning with Exemplar Convolutional Neural Networks.
   IEEE Trans Pattern Anal Mach Intell. 2016 Sep;38(9):1734-47. doi: 10.1109/TPAMI.2015.2496141. Epub 2015 Oct 29.
9. Unsupervised Domain Adaptation for Facial Expression Recognition Using Generative Adversarial Networks.
   Comput Intell Neurosci. 2018 Jul 9;2018:7208794. doi: 10.1155/2018/7208794. eCollection 2018.
10. Encoding primitives generation policy learning for robotic arm to overcome catastrophic forgetting in sequential multi-tasks learning.
    Neural Netw. 2020 Sep;129:163-173. doi: 10.1016/j.neunet.2020.06.003. Epub 2020 Jun 5.

Cited By

1. Train your robot in AR: insights and challenges for humans and robots in continual teaching and learning.
   Front Robot AI. 2025 Aug 13;12:1605652. doi: 10.3389/frobt.2025.1605652. eCollection 2025.
2. Trajectory Tracking Controller for Quadrotor by Continual Reinforcement Learning in Wind-Disturbed Environment.
   Sensors (Basel). 2025 Aug 8;25(16):4895. doi: 10.3390/s25164895.
3. A New Incremental Learning Method Based on Rainbow Memory for Fault Diagnosis of AUV.
   Sensors (Basel). 2025 Jul 22;25(15):4539. doi: 10.3390/s25154539.
4. Dual-Stage Clean-Sample Selection for Incremental Noisy Label Learning.
   Bioengineering (Basel). 2025 Jul 8;12(7):743. doi: 10.3390/bioengineering12070743.
5. Interleaved Replay of Novel and Familiar Memory Traces During Slow-Wave Sleep Prevents Catastrophic Forgetting.
   bioRxiv. 2025 Jun 29:2025.06.25.661579. doi: 10.1101/2025.06.25.661579.
6. Domain-incremental white blood cell classification with privacy-aware continual learning.
   Sci Rep. 2025 Jul 15;15(1):25468. doi: 10.1038/s41598-025-08024-z.
7. Cross paradigm fusion of federated and continual learning on multilayer perceptron mixer architecture for incremental thoracic infection diagnosis.
   Sci Rep. 2025 Jul 8;15(1):24449. doi: 10.1038/s41598-025-06077-8.
8. Mitigating catastrophic forgetting in Multiple sclerosis lesion segmentation using elastic weight consolidation.
   Neuroimage Clin. 2025;46:103795. doi: 10.1016/j.nicl.2025.103795. Epub 2025 May 20.
9. An efficient fine tuning strategy of segment anything model for polyp segmentation.
   Sci Rep. 2025 Apr 23;15(1):14088. doi: 10.1038/s41598-025-97802-w.
10. Leveraging data mining, active learning, and domain adaptation for efficient discovery of advanced oxygen evolution electrocatalysts.
    Sci Adv. 2025 Apr 4;11(14):eadr9038. doi: 10.1126/sciadv.adr9038.