

Learning in deep neural networks and brains with similarity-weighted interleaved learning.

Affiliations

Department of Neurobiology and Behavior, University of California, Irvine, CA 92697.

Canadian Centre for Behavioural Neuroscience, The University of Lethbridge, Lethbridge, Alberta T1K 3M4, Canada.

Publication Info

Proc Natl Acad Sci U S A. 2022 Jul 5;119(27):e2115229119. doi: 10.1073/pnas.2115229119. Epub 2022 Jun 27.

DOI: 10.1073/pnas.2115229119
PMID: 35759669
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9271163/
Abstract

Understanding how the brain learns throughout a lifetime remains a long-standing challenge. In artificial neural networks (ANNs), incorporating novel information too rapidly results in catastrophic interference, i.e., abrupt loss of previously acquired knowledge. Complementary Learning Systems Theory (CLST) suggests that new memories can be gradually integrated into the neocortex by interleaving new memories with existing knowledge. This approach, however, has been assumed to require interleaving all existing knowledge every time something new is learned, which is implausible because it is time-consuming and requires a large amount of data. We show that deep, nonlinear ANNs can learn new information by interleaving only a subset of old items that share substantial representational similarity with the new information. By using such similarity-weighted interleaved learning (SWIL), ANNs can learn new information rapidly with a similar accuracy level and minimal interference, while using a much smaller number of old items presented per epoch (fast and data-efficient). SWIL is shown to work with various standard classification datasets (Fashion-MNIST, CIFAR10, and CIFAR100), deep neural network architectures, and in sequential learning frameworks. We show that data efficiency and speedup in learning new items are increased roughly proportionally to the number of nonoverlapping classes stored in the network, which implies an enormous possible speedup in human brains, which encode a high number of separate categories. Finally, we propose a theoretical model of how SWIL might be implemented in the brain.
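The selection procedure the abstract describes — replaying only those stored classes whose internal representations resemble the new class — can be illustrated with a short sketch. The following is a minimal, hypothetical illustration, not the authors' released code: the names swil_epoch, get_hidden, and old_data are introduced here for clarity, and cosine similarity between class-mean hidden-layer activations is one plausible similarity measure; the paper's exact weighting may differ.

    import numpy as np

    def cosine(u, v):
        # Cosine similarity between two representation vectors.
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

    def swil_epoch(new_items, new_label, old_data, get_hidden, n_old, rng=None):
        """Build one training epoch interleaving a new class with a
        similarity-weighted subset of old items (illustrative sketch).

        new_items: array of examples of the new class
        old_data:  dict mapping each stored class label -> array of examples
        get_hidden: function returning hidden-layer activations for a batch
        n_old:     total number of old items to replay this epoch
        """
        rng = rng or np.random.default_rng()
        # Mean hidden-layer representation of the new class.
        new_mean = get_hidden(new_items).mean(axis=0)
        labels = sorted(old_data)
        # Representational similarity of each stored class to the new class.
        sims = np.array([cosine(get_hidden(old_data[c]).mean(axis=0), new_mean)
                         for c in labels])
        sims = np.clip(sims, 0.0, None)      # keep weights nonnegative
        if sims.sum() == 0:                  # degenerate case: uniform replay
            sims = np.ones_like(sims)
        probs = sims / sims.sum()            # similarity-weighted sampling
        counts = rng.multinomial(n_old, probs)  # old items drawn per class
        epoch = [(new_label, x) for x in new_items]
        for c, k in zip(labels, counts):
            if k:
                idx = rng.integers(len(old_data[c]), size=k)
                epoch.extend((c, old_data[c][i]) for i in idx)
        rng.shuffle(epoch)                   # interleave new and old items
        return epoch

Training then proceeds on the returned epoch as usual. Because probs concentrates replay on representationally similar classes, far fewer old items are needed per epoch than with full interleaving, which is the data-efficiency claim made in the abstract.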


Figures (PMC, figs. 1-8):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6672/9271163/32cf07328929/pnas.2115229119fig01.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6672/9271163/1917fd652396/pnas.2115229119fig02.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6672/9271163/365aee6a36ae/pnas.2115229119fig03.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6672/9271163/6c315d93b660/pnas.2115229119fig04.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6672/9271163/8fb83b2480f7/pnas.2115229119fig05.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6672/9271163/0b1cfb269afd/pnas.2115229119fig06.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6672/9271163/eaedc5f063f6/pnas.2115229119fig07.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6672/9271163/aed6240e8b4e/pnas.2115229119fig08.jpg

Similar Articles

1. Learning in deep neural networks and brains with similarity-weighted interleaved learning.
   Proc Natl Acad Sci U S A. 2022 Jul 5;119(27):e2115229119. doi: 10.1073/pnas.2115229119. Epub 2022 Jun 27.
2. Integration of new information in memory: new insights from a complementary learning systems perspective.
   Philos Trans R Soc Lond B Biol Sci. 2020 May 25;375(1799):20190637. doi: 10.1098/rstb.2019.0637. Epub 2020 Apr 6.
3. Incorporating rapid neocortical learning of new schema-consistent information into complementary learning systems theory.
   J Exp Psychol Gen. 2013 Nov;142(4):1190-1210. doi: 10.1037/a0033812. Epub 2013 Aug 26.
4. Overcoming Long-Term Catastrophic Forgetting Through Adversarial Neural Pruning and Synaptic Consolidation.
   IEEE Trans Neural Netw Learn Syst. 2022 Sep;33(9):4243-4256. doi: 10.1109/TNNLS.2021.3056201. Epub 2022 Aug 31.
5. DynMat, a network that can learn after learning.
   Neural Netw. 2019 Aug;116:88-100. doi: 10.1016/j.neunet.2019.04.005. Epub 2019 Apr 8.
6. Triple-Memory Networks: A Brain-Inspired Method for Continual Learning.
   IEEE Trans Neural Netw Learn Syst. 2022 May;33(5):1925-1934. doi: 10.1109/TNNLS.2021.3111019. Epub 2022 May 2.
7. Energy efficient synaptic plasticity.
   Elife. 2020 Feb 13;9:e50804. doi: 10.7554/eLife.50804.
8. Sleep prevents catastrophic forgetting in spiking neural networks by forming a joint synaptic weight representation.
   PLoS Comput Biol. 2022 Nov 18;18(11):e1010628. doi: 10.1371/journal.pcbi.1010628. eCollection 2022 Nov.
9. What Learning Systems do Intelligent Agents Need? Complementary Learning Systems Theory Updated.
   Trends Cogn Sci. 2016 Jul;20(7):512-534. doi: 10.1016/j.tics.2016.05.004.
10. Lifelong Learning of Spatiotemporal Representations With Dual-Memory Recurrent Self-Organization.
    Front Neurorobot. 2018 Nov 28;12:78. doi: 10.3389/fnbot.2018.00078. eCollection 2018.

Cited By

1. Interleaved Replay of Novel and Familiar Memory Traces During Slow-Wave Sleep Prevents Catastrophic Forgetting.
   bioRxiv. 2025 Jun 29:2025.06.25.661579. doi: 10.1101/2025.06.25.661579.
2. The Eyes are The Windows To The Soul: Pupillary Changes Reflect The Consolidation of New and Old Memories During Sleep.
   Neurosci Bull. 2025 May 11. doi: 10.1007/s12264-025-01410-7.
3. Window to the soul: Pupil dynamics unveil the consolidation of recent and remote memories.
   Zool Res. 2025 Mar 18;46(2):261-262. doi: 10.24272/j.issn.2095-8137.2025.036.
4. Sleep microstructure organizes memory replay.
   Nature. 2025 Jan;637(8048):1161-1169. doi: 10.1038/s41586-024-08340-w. Epub 2025 Jan 1.
5. Reconciling shared versus context-specific information in a neural network model of latent causes.
   Sci Rep. 2024 Jul 22;14(1):16782. doi: 10.1038/s41598-024-64272-5.
6. Bridging Neuroscience and AI: Environmental Enrichment as a model for forward knowledge transfer in continual learning.
   ArXiv. 2025 Jan 23:arXiv:2405.07295v3.
7. A simple illustration of interleaved learning using Kalman filter for linear least squares.
   Results Appl Math. 2023 Nov;20:100409. doi: 10.1016/j.rinam.2023.100409.
8. A cardiologist-like computer-aided interpretation framework to improve arrhythmia diagnosis from imbalanced training datasets.
   Patterns (N Y). 2023 Jul 12;4(9):100795. doi: 10.1016/j.patter.2023.100795. eCollection 2023 Sep 8.

References

1. Brain-inspired replay for continual learning with artificial neural networks.
   Nat Commun. 2020 Aug 13;11(1):4069. doi: 10.1038/s41467-020-17866-2.
2. Integration of new information in memory: new insights from a complementary learning systems perspective.
   Philos Trans R Soc Lond B Biol Sci. 2020 May 25;375(1799):20190637. doi: 10.1098/rstb.2019.0637. Epub 2020 Apr 6.
3. Continual Learning Through Synaptic Intelligence.
   Proc Mach Learn Res. 2017;70:3987-3995.
4. A mathematical theory of semantic development in deep neural networks.
   Proc Natl Acad Sci U S A. 2019 Jun 4;116(23):11537-11546. doi: 10.1073/pnas.1820226116. Epub 2019 May 17.
5. Overcoming catastrophic forgetting in neural networks.
   Proc Natl Acad Sci U S A. 2017 Mar 28;114(13):3521-3526. doi: 10.1073/pnas.1611835114. Epub 2017 Mar 14.
6. Hippocampal sharp wave-ripple: A cognitive biomarker for episodic memory and planning.
   Hippocampus. 2015 Oct;25(10):1073-188. doi: 10.1002/hipo.22488.
7. Incorporating rapid neocortical learning of new schema-consistent information into complementary learning systems theory.
   J Exp Psychol Gen. 2013 Nov;142(4):1190-1210. doi: 10.1037/a0033812. Epub 2013 Aug 26.
8. How does the brain solve visual object recognition?
   Neuron. 2012 Feb 9;73(3):415-34. doi: 10.1016/j.neuron.2012.01.010.
9. Hippocampal-cortical interactions and the dynamics of memory trace reactivation.
   Prog Brain Res. 2011;193:163-77. doi: 10.1016/B978-0-444-53839-0.00011-9.
10. Schema-dependent gene activation and memory encoding in neocortex.
    Science. 2011 Aug 12;333(6044):891-5. doi: 10.1126/science.1205274. Epub 2011 Jul 7.