
Task-Agnostic Continual Learning Using Online Variational Bayes With Fixed-Point Updates.

Author Information

Chen Zeno, Itay Golan, Elad Hoffer, Daniel Soudry

Affiliations

Department of Electrical Engineering, Technion, Israel Institute of Technology, Haifa 3299993, Israel

Habana-Labs, Caesarea 3079821, Israel

Publication

Neural Comput. 2021 Oct 12;33(11):3139-3177. doi: 10.1162/neco_a_01430.

DOI: 10.1162/neco_a_01430
PMID: 34474486
Abstract

Catastrophic forgetting is the notorious vulnerability of neural networks to the changes in the data distribution during learning. This phenomenon has long been considered a major obstacle for using learning agents in realistic continual learning settings. A large body of continual learning research assumes that task boundaries are known during training. However, only a few works consider scenarios in which task boundaries are unknown or not well defined: task-agnostic scenarios. The optimal Bayesian solution for this requires an intractable online Bayes update to the weights posterior. We aim to approximate the online Bayes update as accurately as possible. To do so, we derive novel fixed-point equations for the online variational Bayes optimization problem for multivariate gaussian parametric distributions. By iterating the posterior through these fixed-point equations, we obtain an algorithm (FOO-VB) for continual learning that can handle nonstationary data distribution using a fixed architecture and without using external memory (i.e., without access to previous data). We demonstrate that our method (FOO-VB) outperforms existing methods in task-agnostic scenarios. FOO-VB Pytorch implementation is available at https://github.com/chenzeno/FOO-VB.
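The intractable update the abstract refers to is the online Bayes recursion, p_t(w) ∝ p(D_t | w) p_{t-1}(w), which variational methods approximate by projecting each step back onto a parametric family (here, a Gaussian) by minimizing a KL divergence. As a rough illustration of the fixed-point flavor of such updates, the toy sketch below runs online variational Bayes for a single scalar weight under a Gaussian observation model, where the Gaussian family is conjugate and the fixed-point iteration for the mean converges to the exact posterior. This is a simplified illustration, not the paper's FOO-VB algorithm (which derives fixed-point equations for full multivariate Gaussian posteriors over network weights); the function and variable names are invented for the example.

```python
import numpy as np

def online_vb_gaussian(data, m0=0.0, v0=1.0, n_iter=50):
    """Online variational Bayes for a scalar weight w, observation model
    y ~ N(w * x, 1), with Gaussian posterior q(w) = N(m, v).

    Each observation is absorbed using the previous posterior as prior:
      precision update:  1/v_new = 1/v + x**2
      mean update:       m_new = m + v * x * (y - x * m_new),
    where the mean equation is solved by fixed-point iteration.
    """
    m, v = m0, v0
    for x, y in data:
        v_new = 1.0 / (1.0 / v + x ** 2)
        m_new = m
        for _ in range(n_iter):  # contraction here, since v * x**2 < 1
            m_new = m + v * x * (y - x * m_new)
        m, v = m_new, v_new
    return m, v

# Sanity check on a stationary stream of (x, y) pairs with true w = 1
rng = np.random.default_rng(0)
xs = rng.uniform(-0.8, 0.8, size=200)
ys = xs + rng.normal(size=200)
m, v = online_vb_gaussian(zip(xs, ys))
```

Because this toy model is conjugate, the streamed result should match the exact batch posterior (precision 1/v0 + Σx², mean Σxy divided by that precision when m0 = 0), which makes the fixed-point machinery easy to verify before moving to the nonconjugate, multivariate setting the paper targets.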


Similar Articles

1. Task-Agnostic Continual Learning Using Online Variational Bayes With Fixed-Point Updates. Neural Comput. 2021 Oct 12;33(11):3139-3177. doi: 10.1162/neco_a_01430.
2. Continual Learning Using Bayesian Neural Networks. IEEE Trans Neural Netw Learn Syst. 2021 Sep;32(9):4243-4252. doi: 10.1109/TNNLS.2020.3017292. Epub 2021 Aug 31.
3. Tf-GCZSL: Task-free generalized continual zero-shot learning. Neural Netw. 2022 Nov;155:487-497. doi: 10.1016/j.neunet.2022.08.034. Epub 2022 Sep 6.
4. Improving transparency and representational generalizability through parallel continual learning. Neural Netw. 2023 Apr;161:449-465. doi: 10.1016/j.neunet.2023.02.007. Epub 2023 Feb 10.
5. GC: Generalizable Continual Classification of Medical Images. IEEE Trans Med Imaging. 2024 Nov;43(11):3767-3779. doi: 10.1109/TMI.2024.3398533. Epub 2024 Nov 4.
6. Return of the normal distribution: Flexible deep continual learning with variational auto-encoders. Neural Netw. 2022 Oct;154:397-412. doi: 10.1016/j.neunet.2022.07.016. Epub 2022 Jul 21.
7. VLAD: Task-agnostic VAE-based lifelong anomaly detection. Neural Netw. 2023 Aug;165:248-273. doi: 10.1016/j.neunet.2023.05.032. Epub 2023 May 27.
8. Variational Data-Free Knowledge Distillation for Continual Learning. IEEE Trans Pattern Anal Mach Intell. 2023 Oct;45(10):12618-12634. doi: 10.1109/TPAMI.2023.3271626. Epub 2023 Sep 5.
9. On Sequential Bayesian Inference for Continual Learning. Entropy (Basel). 2023 May 31;25(6):884. doi: 10.3390/e25060884.
10. Continual learning with attentive recurrent neural networks for temporal data classification. Neural Netw. 2023 Jan;158:171-187. doi: 10.1016/j.neunet.2022.10.031. Epub 2022 Nov 11.

Cited By

1. Continual deep reinforcement learning with task-agnostic policy distillation. Sci Rep. 2024 Dec 30;14(1):31661. doi: 10.1038/s41598-024-80774-8.
2. On Sequential Bayesian Inference for Continual Learning. Entropy (Basel). 2023 May 31;25(6):884. doi: 10.3390/e25060884.
3. Three types of incremental learning. Nat Mach Intell. 2022;4(12):1185-1197. doi: 10.1038/s42256-022-00568-3. Epub 2022 Dec 5.
4. Presynaptic stochasticity improves energy efficiency and helps alleviate the stability-plasticity dilemma. Elife. 2021 Oct 18;10:e69884. doi: 10.7554/eLife.69884.