Source: Suppr 超能文献


A Neural Network Model of Continual Learning with Cognitive Control.

Authors

Russin Jacob, Zolfaghar Maryam, Park Seongmin A, Boorman Erie, O'Reilly Randall C

Affiliations

Dept. of Psychology, UC Davis.

Center for Neuroscience, UC Davis.

Publication

Cogsci. 2022 Jul;44:1064-1071.

PMID: 37223441
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10205096/
Abstract

Neural networks struggle in continual learning settings because of catastrophic forgetting: when trials are blocked, new learning can overwrite the learning from previous blocks. Humans learn effectively in these settings, in some cases even showing an advantage of blocking, suggesting the brain contains mechanisms to overcome this problem. Here, we build on previous work and show that neural networks equipped with a mechanism for cognitive control do not exhibit catastrophic forgetting when trials are blocked. We further show an advantage of blocking over interleaving when there is a bias for active maintenance in the control signal, implying a tradeoff between maintenance and the strength of control. Analyses of map-like representations learned by the networks provided additional insights into these mechanisms. Our work highlights the potential of cognitive control to aid continual learning in neural networks, and offers an explanation for the advantage of blocking that has been observed in humans.
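The forgetting effect the abstract describes can be seen in a minimal toy sketch (this is an illustration, not the paper's model): a linear regressor trained on blocked tasks overwrites its earlier solution, while a crude stand-in for a control signal, here a task cue routing learning to task-specific weights, leaves the first task intact. All names and the gating scheme below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "tasks" share the same inputs but have different linear targets
# (a stand-in for the blocked-trial structure described in the abstract).
X = rng.normal(size=(20, 5))
w_a, w_b = rng.normal(size=5), rng.normal(size=5)
y_a, y_b = X @ w_a, X @ w_b

def mse(w, y):
    return float(np.mean((X @ w - y) ** 2))

def train(w, y, lr=0.05, epochs=500):
    # Plain full-batch gradient descent on squared error.
    for _ in range(epochs):
        w = w - lr * 2 * X.T @ (X @ w - y) / len(X)
    return w

# Blocked curriculum: converge on task A, then on task B.
w = train(np.zeros(5), y_a)
err_a_before = mse(w, y_a)   # near zero after the A block
w = train(w, y_b)
err_a_after = mse(w, y_a)    # large: the B block overwrote A (forgetting)

# Crude stand-in for a control signal: a task cue selects task-specific
# weights, so training on B cannot overwrite A's solution.
gated = {"A": train(np.zeros(5), y_a), "B": train(np.zeros(5), y_b)}
err_a_gated = mse(gated["A"], y_a)

print(err_a_before, err_a_after, err_a_gated)
```

Hard gating like this sidesteps forgetting trivially; the paper's contribution concerns learned control representations and the maintenance/strength tradeoff, which this sketch does not capture.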


Figures (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c1ef/10205096/ffbb34fa962a/nihms-1851069-f0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c1ef/10205096/d4d4e526175d/nihms-1851069-f0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c1ef/10205096/eddd2977e284/nihms-1851069-f0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c1ef/10205096/aaa99f84d3d1/nihms-1851069-f0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c1ef/10205096/76db5c55807f/nihms-1851069-f0005.jpg

Similar Articles

1. A Neural Network Model of Continual Learning with Cognitive Control. Cogsci. 2022 Jul;44:1064-1071.
2. Encoding primitives generation policy learning for robotic arm to overcome catastrophic forgetting in sequential multi-tasks learning. Neural Netw. 2020 Sep;129:163-173. doi: 10.1016/j.neunet.2020.06.003. Epub 2020 Jun 5.
3. Continual learning with attentive recurrent neural networks for temporal data classification. Neural Netw. 2023 Jan;158:171-187. doi: 10.1016/j.neunet.2022.10.031. Epub 2022 Nov 11.
4. Comparing continual task learning in minds and machines. Proc Natl Acad Sci U S A. 2018 Oct 30;115(44):E10313-E10322. doi: 10.1073/pnas.1800755115. Epub 2018 Oct 15.
5. Sleep prevents catastrophic forgetting in spiking neural networks by forming a joint synaptic weight representation. PLoS Comput Biol. 2022 Nov 18;18(11):e1010628. doi: 10.1371/journal.pcbi.1010628. eCollection 2022 Nov.
6. Continual Learning Using Bayesian Neural Networks. IEEE Trans Neural Netw Learn Syst. 2021 Sep;32(9):4243-4252. doi: 10.1109/TNNLS.2020.3017292. Epub 2021 Aug 31.
7. Adaptive Progressive Continual Learning. IEEE Trans Pattern Anal Mach Intell. 2022 Oct;44(10):6715-6728. doi: 10.1109/TPAMI.2021.3095064. Epub 2022 Sep 14.
8. Continual Learning for Activity Recognition. Annu Int Conf IEEE Eng Med Biol Soc. 2022 Jul;2022:2416-2420. doi: 10.1109/EMBC48229.2022.9871690.
9. Continual Learning Objective for Analyzing Complex Knowledge Representations. Sensors (Basel). 2022 Feb 21;22(4):1667. doi: 10.3390/s22041667.
10. Reducing Catastrophic Forgetting With Associative Learning: A Lesson From Fruit Flies. Neural Comput. 2023 Oct 10;35(11):1797-1819. doi: 10.1162/neco_a_01615.

Cited By

1. Development and validation of predictive models for diabetic retinopathy using machine learning. PLoS One. 2025 Feb 24;20(2):e0318226. doi: 10.1371/journal.pone.0318226. eCollection 2025.
2. Rapid context inference in a thalamocortical model using recurrent neural networks. Nat Commun. 2024 Sep 27;15(1):8275. doi: 10.1038/s41467-024-52289-3.
3. Reconciling shared versus context-specific information in a neural network model of latent causes. Sci Rep. 2024 Jul 22;14(1):16782. doi: 10.1038/s41598-024-64272-5.
4. The dynamic interplay between in-context and in-weight learning in humans and neural networks. ArXiv. 2025 Apr 25:arXiv:2402.08674v4.
5. A recurrent neural network model of prefrontal brain activity during a working memory task. PLoS Comput Biol. 2023 Oct 18;19(10):e1011555. doi: 10.1371/journal.pcbi.1011555. eCollection 2023 Oct.
6. Continual task learning in natural and artificial agents. Trends Neurosci. 2023 Mar;46(3):199-210. doi: 10.1016/j.tins.2022.12.006. Epub 2023 Jan 20.
7. Modelling continual learning in humans with Hebbian context gating and exponentially decaying task signals. PLoS Comput Biol. 2023 Jan 19;19(1):e1010808. doi: 10.1371/journal.pcbi.1010808. eCollection 2023 Jan.

References

1. Modelling continual learning in humans with Hebbian context gating and exponentially decaying task signals. PLoS Comput Biol. 2023 Jan 19;19(1):e1010808. doi: 10.1371/journal.pcbi.1010808. eCollection 2023 Jan.
2. Orthogonal representations for robust context-dependent task performance in brains and neural networks. Neuron. 2022 Apr 6;110(7):1258-1270.e11. doi: 10.1016/j.neuron.2022.01.005. Epub 2022 Jan 31.
3. Complementary Structure-Learning Neural Networks for Relational Reasoning. Cogsci. 2021 Jul;2021:1560-1566.
4. Inferences on a multidimensional social hierarchy use a grid-like code. Nat Neurosci. 2021 Sep;24(9):1292-1301. doi: 10.1038/s41593-021-00916-3. Epub 2021 Aug 31.
5. Embracing Change: Continual Learning in Deep Neural Networks. Trends Cogn Sci. 2020 Dec;24(12):1028-1040. doi: 10.1016/j.tics.2020.09.004. Epub 2020 Nov 3.
6. A modeling framework for adaptive lifelong learning with transfer and savings through gating in the prefrontal cortex. Proc Natl Acad Sci U S A. 2020 Nov 24;117(47):29872-29882. doi: 10.1073/pnas.2009591117. Epub 2020 Nov 5.
7. Map Making: Constructing, Combining, and Inferring on Abstract Cognitive Maps. Neuron. 2020 Sep 23;107(6):1226-1238.e8. doi: 10.1016/j.neuron.2020.06.030. Epub 2020 Jul 22.
8. How Sequential Interactive Processing Within Frontostriatal Loops Supports a Continuum of Habitual to Controlled Processing. Front Psychol. 2020 Mar 10;11:380. doi: 10.3389/fpsyg.2020.00380. eCollection 2020.
9. Reinforcement Learning, Fast and Slow. Trends Cogn Sci. 2019 May;23(5):408-422. doi: 10.1016/j.tics.2019.02.006. Epub 2019 Apr 16.
10. Comparing continual task learning in minds and machines. Proc Natl Acad Sci U S A. 2018 Oct 30;115(44):E10313-E10322. doi: 10.1073/pnas.1800755115. Epub 2018 Oct 15.