

Integration of new information in memory: new insights from a complementary learning systems perspective.

Affiliations

Department of Psychology, Stanford University, Stanford, CA 94305, USA.

University of California, Irvine, CA 92697, USA.

Publication information

Philos Trans R Soc Lond B Biol Sci. 2020 May 25;375(1799):20190637. doi: 10.1098/rstb.2019.0637. Epub 2020 Apr 6.

Abstract

According to complementary learning systems theory, integrating new memories into the neocortex of the brain without interfering with what is already known depends on a gradual learning process, interleaving new items with previously learned items. However, empirical studies show that information consistent with prior knowledge can sometimes be integrated very quickly. We use artificial neural networks with properties like those we attribute to the neocortex to develop an understanding of the role of consistency with prior knowledge in putatively neocortex-like learning systems, providing new insights into when integration will be fast or slow and how integration might be made more efficient when the items to be learned are hierarchically structured. The work relies on deep linear networks that capture the qualitative aspects of the learning dynamics of the more complex nonlinear networks used in previous work. The time course of learning in these networks can be linked to the hierarchical structure in the training data, captured mathematically as a set of dimensions that correspond to the branches in the hierarchy. In this context, a new item to be learned can be characterized as having aspects that project onto previously known dimensions, and others that require adding a new branch/dimension. The projection onto the known dimensions can be learned rapidly without interleaving, but learning the new dimension requires gradual interleaved learning. When a new item only overlaps with items within one branch of a hierarchy, interleaving can focus on the previously known items within this branch, resulting in faster integration with less interleaving overall. The discussion considers how the brain might exploit these facts to make learning more efficient and highlights predictions about what aspects of new information might be hard or easy to learn. This article is part of the Theo Murphy meeting issue 'Memory reactivation: replaying events past, present and future'.

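The abstract's central claim, that the time course of learning in a deep linear network tracks the singular dimensions of hierarchically structured training data, with broader branches (larger singular values) learned faster, can be illustrated with a small simulation. The sketch below is a toy construction, not code from the paper: the four-item two-branch dataset, the layer sizes, and the learning rate are illustrative assumptions. It trains a two-layer linear network by gradient descent and records the first epoch at which each singular mode of the input-output map reaches half its target strength.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four items in a two-branch hierarchy, one-hot input codes.
X = np.eye(4)
# Feature targets: column 0 is shared by all items (weighted more
# strongly so the hierarchy levels have distinct singular values),
# column 1 codes the branch (+1 / -1), columns 2-5 are item-specific.
Y = np.array([
    [2.0,  1.0, 1.0, 0.0, 0.0, 0.0],  # branch A, item 1
    [2.0,  1.0, 0.0, 1.0, 0.0, 0.0],  # branch A, item 2
    [2.0, -1.0, 0.0, 0.0, 1.0, 0.0],  # branch B, item 1
    [2.0, -1.0, 0.0, 0.0, 0.0, 1.0],  # branch B, item 2
])

# Singular modes of the target input-output map. For this dataset the
# singular values are sqrt(17), sqrt(5), 1, 1: shared dimension,
# branch dimension, then the two item-specific dimensions.
U, S, Vt = np.linalg.svd(Y.T, full_matrices=False)

# Two-layer linear network y = W2 @ W1 @ x with small random weights.
H = 8
W1 = 0.01 * rng.standard_normal((H, 4))
W2 = 0.01 * rng.standard_normal((6, H))

lr = 0.1
half_time = {}  # first epoch at which each mode reaches half strength
for epoch in range(5000):
    M = W2 @ W1                 # current input-output map (X is identity)
    err = M - Y.T
    g2 = (err @ W1.T) / 4       # mean-squared-error gradients
    g1 = (W2.T @ err) / 4
    W2 -= lr * g2
    W1 -= lr * g1
    # Project the current map onto each singular mode of the data.
    strength = np.diag(U.T @ (W2 @ W1) @ Vt.T)
    for k, s in enumerate(S):
        if k not in half_time and strength[k] >= s / 2:
            half_time[k] = epoch

# Dimensions are learned in order of singular value: the shared
# dimension first, then the branch dimension, then item-specific ones.
print(S.round(2), [half_time[k] for k in range(len(S))])
```

On this toy data the half-strength epochs come out in the same order as the singular values, matching the abstract's point that projections of a new item onto broad, already-known dimensions are acquired quickly while narrow, item-specific dimensions require slower learning.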

Similar articles

Complementary learning systems.
Cogn Sci. 2014 Aug;38(6):1229-48. doi: 10.1111/j.1551-6709.2011.01214.x. Epub 2011 Dec 5.
Rapid consolidation of new knowledge in adulthood via fast mapping.
Trends Cogn Sci. 2015 Sep;19(9):486-8. doi: 10.1016/j.tics.2015.06.001. Epub 2015 Jun 29.

Cited by

The memory systems of the human brain and generative artificial intelligence.
Heliyon. 2024 May 24;10(11):e31965. doi: 10.1016/j.heliyon.2024.e31965. eCollection 2024 Jun 15.

References

A mathematical theory of semantic development in deep neural networks.
Proc Natl Acad Sci U S A. 2019 Jun 4;116(23):11537-11546. doi: 10.1073/pnas.1820226116. Epub 2019 May 17.
Deep learning.
Nature. 2015 May 28;521(7553):436-44. doi: 10.1038/nature14539.
The place of modeling in cognitive science.
Top Cogn Sci. 2009 Jan;1(1):11-38. doi: 10.1111/j.1756-8765.2008.01003.x.
