
Progressive learning: A deep learning framework for continual learning.

Affiliations

School of Engineering, RMIT University, Melbourne VIC 3001, Australia.

School of Science, RMIT University, Melbourne VIC 3001, Australia.

Publication Information

Neural Netw. 2020 Aug;128:345-357. doi: 10.1016/j.neunet.2020.05.011. Epub 2020 May 18.

DOI: 10.1016/j.neunet.2020.05.011
PMID: 32470799
Abstract

Continual learning is the ability of a learning system to solve new tasks by utilizing previously acquired knowledge from learning and performing prior tasks without having significant adverse effects on the acquired prior knowledge. Continual learning is key to advancing machine learning and artificial intelligence. Progressive learning is a deep learning framework for continual learning that comprises three procedures: curriculum, progression, and pruning. The curriculum procedure is used to actively select a task to learn from a set of candidate tasks. The progression procedure is used to grow the capacity of the model by adding new parameters that leverage parameters learned in prior tasks, while learning from data available for the new task at hand, without being susceptible to catastrophic forgetting. The pruning procedure is used to counteract the growth in the number of parameters as further tasks are learned, as well as to mitigate negative forward transfer, in which prior knowledge unrelated to the task at hand may interfere and worsen performance. Progressive learning is evaluated on a number of supervised classification tasks in the image recognition and speech recognition domains to demonstrate its advantages compared with baseline methods. It is shown that, when tasks are related, progressive learning leads to faster learning that converges to better generalization performance using a smaller number of dedicated parameters.
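The abstract names three procedures (curriculum, progression, pruning) but gives no implementation details. Below is a minimal, hypothetical PyTorch sketch of such a loop, assuming a column-per-task architecture with lateral connections to frozen prior columns; the `Column` and `ProgressiveLearner` classes, the difficulty-based `curriculum` proxy, and the L1 magnitude-pruning step are all illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of the three procedures named in the abstract:
# curriculum (active task selection), progression (growing a new column
# that reuses frozen prior columns via lateral connections), and pruning
# (counteracting parameter growth). Names and heuristics are illustrative
# assumptions, not the paper's actual implementation.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune


class Column(nn.Module):
    """One task-specific column that reads features from frozen prior columns."""

    def __init__(self, in_dim, hidden, n_classes, n_prior):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        # Lateral adapters leverage representations learned on earlier tasks.
        self.lateral = nn.ModuleList(
            [nn.Linear(hidden, hidden, bias=False) for _ in range(n_prior)]
        )
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, x, prior_feats):
        h = torch.relu(self.fc1(x))
        for adapter, f in zip(self.lateral, prior_feats):
            h = h + adapter(f)  # forward transfer from prior tasks
        return self.out(h), h


class ProgressiveLearner:
    def __init__(self, in_dim=784, hidden=128):
        self.in_dim, self.hidden = in_dim, hidden
        self.columns = []  # grown by the progression procedure

    def prior_features(self, x):
        feats = []
        with torch.no_grad():  # prior columns are frozen: no forgetting
            for col in self.columns:
                _, h = col(x, feats)
                feats.append(h)
        return feats

    def curriculum(self, candidates):
        # Stand-in for active task selection: order candidates by an assumed
        # per-task difficulty score (the paper's criterion is more involved).
        return min(candidates, key=lambda t: t["difficulty"])

    def learn_task(self, task, epochs=1, sparsity=0.5):
        col = Column(self.in_dim, self.hidden, task["n_classes"], len(self.columns))
        opt = torch.optim.Adam(col.parameters(), lr=1e-3)
        for _ in range(epochs):
            for x, y in task["loader"]:
                logits, _ = col(x, self.prior_features(x))
                loss = nn.functional.cross_entropy(logits, y)
                opt.zero_grad()
                loss.backward()
                opt.step()
        # Pruning: cap parameter growth and trim low-magnitude connections,
        # including lateral ones that may carry unrelated (negative) transfer.
        for m in [col.fc1, *col.lateral]:
            prune.l1_unstructured(m, name="weight", amount=sparsity)
            prune.remove(m, "weight")  # make the sparsity permanent
        for p in col.parameters():  # freeze before the next task arrives
            p.requires_grad_(False)
        self.columns.append(col)


if __name__ == "__main__":
    from torch.utils.data import DataLoader, TensorDataset

    def toy_task(n_classes, difficulty):
        x, y = torch.randn(256, 784), torch.randint(0, n_classes, (256,))
        return {"loader": DataLoader(TensorDataset(x, y), batch_size=32),
                "n_classes": n_classes, "difficulty": difficulty}

    learner = ProgressiveLearner()
    tasks = [toy_task(5, 2.0), toy_task(2, 1.0), toy_task(10, 3.0)]
    while tasks:
        task = learner.curriculum(tasks)  # easiest remaining task first
        tasks.remove(task)
        learner.learn_task(task)
    print(f"grown to {len(learner.columns)} task columns")
```

In this sketch, freezing each finished column is what prevents catastrophic forgetting, and the pruning pass is what keeps the dedicated parameter count from growing unchecked as lateral connections accumulate across tasks.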


Similar Articles

1. Progressive learning: A deep learning framework for continual learning.
   Neural Netw. 2020 Aug;128:345-357. doi: 10.1016/j.neunet.2020.05.011. Epub 2020 May 18.
2. Self-Net: Lifelong Learning via Continual Self-Modeling.
   Front Artif Intell. 2020 Apr 9;3:19. doi: 10.3389/frai.2020.00019. eCollection 2020.
3. Overcoming Long-Term Catastrophic Forgetting Through Adversarial Neural Pruning and Synaptic Consolidation.
   IEEE Trans Neural Netw Learn Syst. 2022 Sep;33(9):4243-4256. doi: 10.1109/TNNLS.2021.3056201. Epub 2022 Aug 31.
4. Convolutional Neural Network With Developmental Memory for Continual Learning.
   IEEE Trans Neural Netw Learn Syst. 2021 Jun;32(6):2691-2705. doi: 10.1109/TNNLS.2020.3007548. Epub 2021 Jun 2.
5. Variational Data-Free Knowledge Distillation for Continual Learning.
   IEEE Trans Pattern Anal Mach Intell. 2023 Oct;45(10):12618-12634. doi: 10.1109/TPAMI.2023.3271626. Epub 2023 Sep 5.
6. Encoding primitives generation policy learning for robotic arm to overcome catastrophic forgetting in sequential multi-tasks learning.
   Neural Netw. 2020 Sep;129:163-173. doi: 10.1016/j.neunet.2020.06.003. Epub 2020 Jun 5.
7. Comparing continual task learning in minds and machines.
   Proc Natl Acad Sci U S A. 2018 Oct 30;115(44):E10313-E10322. doi: 10.1073/pnas.1800755115. Epub 2018 Oct 15.
8. Continual learning with attentive recurrent neural networks for temporal data classification.
   Neural Netw. 2023 Jan;158:171-187. doi: 10.1016/j.neunet.2022.10.031. Epub 2022 Nov 11.
9. On Sequential Bayesian Inference for Continual Learning.
   Entropy (Basel). 2023 May 31;25(6):884. doi: 10.3390/e25060884.
10. Adversarial Feature Alignment: Avoid Catastrophic Forgetting in Incremental Task Lifelong Learning.
   Neural Comput. 2019 Nov;31(11):2266-2291. doi: 10.1162/neco_a_01232. Epub 2019 Sep 16.

Cited By

1. Accelerating Multi-Objective Optimization of Composite Structures Using Multi-Fidelity Surrogate Models and Curriculum Learning.
   Materials (Basel). 2025 Mar 26;18(7):1469. doi: 10.3390/ma18071469.
2. Generative Diffusion-Based Task Incremental Learning Method for Decoding Motor Imagery EEG.
   Brain Sci. 2025 Jan 21;15(2):98. doi: 10.3390/brainsci15020098.
3. An adaptive session-incremental broad learning system for continuous motor imagery EEG classification.
   Med Biol Eng Comput. 2025 Apr;63(4):1059-1079. doi: 10.1007/s11517-024-03246-1. Epub 2024 Nov 29.
4. Progressive transfer learning for advancing machine learning-based reduced-order modeling.
   Sci Rep. 2024 Jul 8;14(1):15731. doi: 10.1038/s41598-024-64778-y.
5. Progressive DeepSSM: Training Methodology for Image-To-Shape Deep Models.
   Shape Med Imaging (2023). 2023 Oct;14350:157-172. doi: 10.1007/978-3-031-46914-5_13. Epub 2023 Oct 31.
6. Genetic dissection of mutual interference between two consecutive learning tasks in .
   Elife. 2023 Mar 10;12:e83516. doi: 10.7554/eLife.83516.
7. NMDA Receptor-Arc Signaling Is Required for Memory Updating and Is Disrupted in Alzheimer's Disease.
   Biol Psychiatry. 2023 Nov 1;94(9):706-720. doi: 10.1016/j.biopsych.2023.02.008. Epub 2023 Feb 14.