
On Sequential Bayesian Inference for Continual Learning.

Authors

Kessler Samuel, Cobb Adam, Rudner Tim G J, Zohren Stefan, Roberts Stephen J

Affiliations

Department of Engineering Science, University of Oxford, Oxford OX2 6ED, UK.

SRI International, Arlington, VA 22209, USA.

Publication

Entropy (Basel). 2023 May 31;25(6):884. doi: 10.3390/e25060884.

DOI: 10.3390/e25060884
PMID: 37372228
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10297370/
Abstract

Sequential Bayesian inference can be used to prevent catastrophic forgetting of past tasks and to provide an informative prior when learning new tasks. We revisit sequential Bayesian inference and assess whether using the previous task's posterior as a prior for a new task can prevent catastrophic forgetting in Bayesian neural networks. Our first contribution is to perform sequential Bayesian inference using Hamiltonian Monte Carlo. We propagate the posterior as a prior for new tasks by approximating it with a density estimator fit to Hamiltonian Monte Carlo samples. We find that this approach fails to prevent catastrophic forgetting, demonstrating the difficulty of performing sequential Bayesian inference in neural networks. We then study simple analytical examples of sequential Bayesian inference and continual learning, and highlight the issue of model misspecification, which can lead to sub-optimal continual learning performance despite exact inference. Furthermore, we discuss how task data imbalances can cause forgetting. Given these limitations, we argue that we need probabilistic models of the continual learning generative process rather than relying on sequential Bayesian inference over Bayesian neural network weights. Our final contribution is to propose a simple baseline that is competitive with the best-performing Bayesian continual learning methods on class-incremental continual learning computer vision benchmarks.
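The recursion the abstract builds on is standard Bayesian filtering over tasks: the posterior after task t-1 becomes the prior for task t. Below is a minimal, purely illustrative Python sketch (not code from the paper) that runs this recursion exactly in a conjugate Gaussian model with known noise variance. The paper's point is that for Bayesian neural networks the posterior has no closed form and must be approximated, for example by fitting a density estimator to Hamiltonian Monte Carlo samples, and that approximation step is where forgetting re-enters.

import numpy as np

# Exact sequential Bayesian inference for the mean theta of a Gaussian with
# known noise variance: p(theta | D_{1:t}) ∝ p(D_t | theta) p(theta | D_{1:t-1}).
def update(prior_mean, prior_var, data, noise_var):
    """Conjugate Gaussian update: the returned posterior is the next task's prior."""
    post_var = 1.0 / (1.0 / prior_var + len(data) / noise_var)
    post_mean = post_var * (prior_mean / prior_var + data.sum() / noise_var)
    return post_mean, post_var

rng = np.random.default_rng(0)
mean, var = 0.0, 10.0                      # broad initial prior
for task in range(3):                      # three sequential "tasks"
    data = rng.normal(2.0, 1.0, size=20)   # here every task shares theta = 2.0
    mean, var = update(mean, var, data, noise_var=1.0)
    print(f"task {task}: posterior mean = {mean:.3f}, variance = {var:.4f}")

Because this toy model is well specified and inference is exact, the posterior concentrates on the shared parameter and nothing is forgotten; the paper's analytical examples show how model misspecification and task data imbalance break this picture even when inference remains exact.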


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1b40/10297370/b793f5ccf6f3/entropy-25-00884-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1b40/10297370/5c6e3b8b3ddc/entropy-25-00884-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1b40/10297370/324d9b05abd6/entropy-25-00884-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1b40/10297370/faf130bf0618/entropy-25-00884-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1b40/10297370/be78bfd87643/entropy-25-00884-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1b40/10297370/dbfd30408af2/entropy-25-00884-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1b40/10297370/4c0de006d343/entropy-25-00884-g0A1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1b40/10297370/dc8a2a227fa1/entropy-25-00884-g0A2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1b40/10297370/0b976eb0adcd/entropy-25-00884-g0A3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1b40/10297370/5f5e2a7f54db/entropy-25-00884-g0A4.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1b40/10297370/d63511162e63/entropy-25-00884-g0A5.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1b40/10297370/0e955cc59932/entropy-25-00884-g0A6.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1b40/10297370/05f8c86d892e/entropy-25-00884-g0A7.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1b40/10297370/5e396ae6bea0/entropy-25-00884-g0A8.jpg

Similar Articles

1. On Sequential Bayesian Inference for Continual Learning.
Entropy (Basel). 2023 May 31;25(6):884. doi: 10.3390/e25060884.
2. Continual Learning Using Bayesian Neural Networks.
IEEE Trans Neural Netw Learn Syst. 2021 Sep;32(9):4243-4252. doi: 10.1109/TNNLS.2020.3017292. Epub 2021 Aug 31.
3. Unified Probabilistic Deep Continual Learning through Generative Replay and Open Set Recognition.
J Imaging. 2022 Mar 31;8(4):93. doi: 10.3390/jimaging8040093.
4. Continual learning with attentive recurrent neural networks for temporal data classification.
Neural Netw. 2023 Jan;158:171-187. doi: 10.1016/j.neunet.2022.10.031. Epub 2022 Nov 11.
5. Self-Net: Lifelong Learning via Continual Self-Modeling.
Front Artif Intell. 2020 Apr 9;3:19. doi: 10.3389/frai.2020.00019. eCollection 2020.
6. Encoding primitives generation policy learning for robotic arm to overcome catastrophic forgetting in sequential multi-tasks learning.
Neural Netw. 2020 Sep;129:163-173. doi: 10.1016/j.neunet.2020.06.003. Epub 2020 Jun 5.
7. Triple-Memory Networks: A Brain-Inspired Method for Continual Learning.
IEEE Trans Neural Netw Learn Syst. 2022 May;33(5):1925-1934. doi: 10.1109/TNNLS.2021.3111019. Epub 2022 May 2.
8. Adaptive Progressive Continual Learning.
IEEE Trans Pattern Anal Mach Intell. 2022 Oct;44(10):6715-6728. doi: 10.1109/TPAMI.2021.3095064. Epub 2022 Sep 14.
9. Efficient Perturbation Inference and Expandable Network for continual learning.
Neural Netw. 2023 Feb;159:97-106. doi: 10.1016/j.neunet.2022.10.030. Epub 2022 Nov 7.
10. Continual learning with invertible generative models.
Neural Netw. 2023 Jul;164:606-616. doi: 10.1016/j.neunet.2023.05.020. Epub 2023 May 19.

Cited By

1. Layer wise Scaled Gaussian Priors for Markov Chain Monte Carlo Sampled deep Bayesian neural networks.
Front Artif Intell. 2025 Apr 25;8:1444891. doi: 10.3389/frai.2025.1444891. eCollection 2025.

References

1. Three types of incremental learning.
Nat Mach Intell. 2022;4(12):1185-1197. doi: 10.1038/s42256-022-00568-3. Epub 2022 Dec 5.
2. Task-Agnostic Continual Learning Using Online Variational Bayes With Fixed-Point Updates.
Neural Comput. 2021 Oct 12;33(11):3139-3177. doi: 10.1162/neco_a_01430.
3. A Continual Learning Survey: Defying Forgetting in Classification Tasks.
IEEE Trans Pattern Anal Mach Intell. 2022 Jul;44(7):3366-3385. doi: 10.1109/TPAMI.2021.3057446. Epub 2022 Jun 3.
4. Continual Learning Through Synaptic Intelligence.
Proc Mach Learn Res. 2017;70:3987-3995.
5. Overcoming catastrophic forgetting in neural networks.
Proc Natl Acad Sci U S A. 2017 Mar 28;114(13):3521-3526. doi: 10.1073/pnas.1611835114. Epub 2017 Mar 14.
6. Catastrophic forgetting in connectionist networks.
Trends Cogn Sci. 1999 Apr;3(4):128-135. doi: 10.1016/s1364-6613(99)01294-2.