
Overparameterized neural networks implement associative memory.

Affiliations

Laboratory for Information & Decision Systems, Massachusetts Institute of Technology, Cambridge, MA 02139.

Institute for Data, Systems, and Society, Massachusetts Institute of Technology, Cambridge, MA 02139.

Publication Information

Proc Natl Acad Sci U S A. 2020 Nov 3;117(44):27162-27170. doi: 10.1073/pnas.2005013117. Epub 2020 Oct 16.

DOI: 10.1073/pnas.2005013117
PMID: 33067397
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7959487/
Abstract

Identifying computational mechanisms for memorization and retrieval of data is a long-standing problem at the intersection of machine learning and neuroscience. Our main finding is that standard overparameterized deep neural networks trained using standard optimization methods implement such a mechanism for real-valued data. We provide empirical evidence that 1) overparameterized autoencoders store training samples as attractors and thus iterating the learned map leads to sample recovery, and that 2) the same mechanism allows for encoding sequences of examples and serves as an even more efficient mechanism for memory than autoencoding. Theoretically, we prove that when trained on a single example, autoencoders store the example as an attractor. Lastly, by treating a sequence encoder as a composition of maps, we prove that sequence encoding provides a more efficient mechanism for memory than autoencoding.
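
To make the retrieval mechanism concrete, here is a minimal sketch in PyTorch (not the authors' code; the network sizes, learning rate, noise level, and iteration counts are illustrative assumptions): train an overparameterized autoencoder f to near-zero reconstruction error on a few real-valued samples, then iterate z <- f(z) starting from a corrupted input. If the sample is stored as an attractor, the iteration drifts back toward it.

```python
# Sketch: attractor retrieval by iterating a trained autoencoder map.
# All hyperparameters here are assumptions for illustration.
import torch
import torch.nn as nn

torch.manual_seed(0)

d, n = 32, 4                      # data dimension, number of stored samples
X = torch.rand(n, d)              # real-valued "training set"

# Heavily overparameterized relative to n samples of dimension d.
f = nn.Sequential(
    nn.Linear(d, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, d),
)
opt = torch.optim.Adam(f.parameters(), lr=1e-3)

# Train to interpolation: f(x_i) ~= x_i for every training sample.
for _ in range(5000):
    opt.zero_grad()
    loss = ((f(X) - X) ** 2).mean()
    loss.backward()
    opt.step()

# Retrieval: start from a noisy version of x_0 and iterate the learned map.
with torch.no_grad():
    z = X[0] + 0.1 * torch.randn(d)
    for _ in range(100):
        z = f(z)
    print("distance to stored sample:", (z - X[0]).norm().item())
```

Whether the iteration actually converges to the stored sample depends on the corruption level and on how the network is trained; the paper's empirical claim is that standard overparameterized networks trained with standard optimizers typically exhibit this attractor behavior.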

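The abstract's second claim, that sequence encoding is an even more efficient memory mechanism, can be sketched the same way: train the same architecture to map each element to its successor, f(x_i) = x_{i+1} (cyclically), so the whole sequence is stored as a limit cycle of the iterated map and can be replayed from any single element. Again a hypothetical sketch with assumed hyperparameters, not the paper's implementation:

```python
# Sketch: sequence encoding as a composition of maps. One network stores
# n samples as a cycle (f(x_i) = x_{i+1}) rather than n fixed points.
import torch
import torch.nn as nn

torch.manual_seed(0)

d, n = 32, 6
X = torch.rand(n, d)                  # the sequence x_0, ..., x_{n-1}
Y = torch.roll(X, shifts=-1, dims=0)  # targets: x_1, ..., x_{n-1}, x_0

f = nn.Sequential(
    nn.Linear(d, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, d),
)
opt = torch.optim.Adam(f.parameters(), lr=1e-3)

# Train the successor map to interpolation.
for _ in range(5000):
    opt.zero_grad()
    loss = ((f(X) - Y) ** 2).mean()
    loss.backward()
    opt.step()

# Replay: n applications of f from x_0 should walk the whole sequence
# and return to (near) x_0.
with torch.no_grad():
    z = X[0].clone()
    for i in range(1, n + 1):
        z = f(z)
        print(f"step {i}: distance to x_{i % n} =",
              (z - X[i % n]).norm().item())
```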

Figures (from the PMC record, Figs. 1-5):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2287/7959487/90fcc78ecdbe/pnas.2005013117fig01.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2287/7959487/4709d66838b0/pnas.2005013117fig02.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2287/7959487/7d57f31cf9eb/pnas.2005013117fig03.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2287/7959487/2924452aa640/pnas.2005013117fig04.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2287/7959487/61641b265255/pnas.2005013117fig05.jpg

Similar Articles

1. Overparameterized neural networks implement associative memory.
   Proc Natl Acad Sci U S A. 2020 Nov 3;117(44):27162-27170. doi: 10.1073/pnas.2005013117. Epub 2020 Oct 16.
2. Collective computational intelligence in biology - Emergence of memory in somatic tissues.
   Biosystems. 2023 Jan;223:104816. doi: 10.1016/j.biosystems.2022.104816. Epub 2022 Nov 25.
3. Energy minimization in the nonlinear dynamic recurrent associative memory.
   Neural Netw. 2008 Sep;21(7):1041-4. doi: 10.1016/j.neunet.2008.06.005. Epub 2008 Jun 25.
4. Compositional memory in attractor neural networks with one-step learning.
   Neural Netw. 2021 Jun;138:78-97. doi: 10.1016/j.neunet.2021.01.031. Epub 2021 Feb 11.
5. Network capacity analysis for latent attractor computation.
   Network. 2003 May;14(2):273-302.
6. Memory dynamics in attractor networks with saliency weights.
   Neural Comput. 2010 Jul;22(7):1899-926. doi: 10.1162/neco.2010.07-09-1050.
7. Eigen value based loss function for training attractors in iterated autoencoders.
   Neural Netw. 2023 Apr;161:575-588. doi: 10.1016/j.neunet.2023.02.003. Epub 2023 Feb 9.
8. Learning sequence attractors in recurrent networks with hidden neurons.
   Neural Netw. 2024 Oct;178:106466. doi: 10.1016/j.neunet.2024.106466. Epub 2024 Jun 22.
9. Stable memory with unstable synapses.
   Nat Commun. 2019 Sep 30;10(1):4441. doi: 10.1038/s41467-019-12306-2.
10. SSTE: Syllable-Specific Temporal Encoding to FORCE-learn audio sequences with an associative memory approach.
    Neural Netw. 2024 Sep;177:106368. doi: 10.1016/j.neunet.2024.106368. Epub 2024 May 7.

Cited By

1. An enhanced transcription factor repressilator that buffers stochasticity and entrains to an erratic external circadian signal.
   Front Syst Biol. 2023 Dec 13;3:1276734. doi: 10.3389/fsysb.2023.1276734. eCollection 2023.
2. Predictive Coding Model Detects Novelty on Different Levels of Representation Hierarchy.
   Neural Comput. 2025 Jul 17;37(8):1373-1408. doi: 10.1162/neco_a_01769.
3. Integrating representation learning, permutation, and optimization to detect lineage-related gene expression patterns.
   Nat Commun. 2025 Jan 27;16(1):1062. doi: 10.1038/s41467-025-56388-7.
4. Episodic and associative memory from spatial scaffolds in the hippocampus.
   Nature. 2025 Feb;638(8051):739-751. doi: 10.1038/s41586-024-08392-y. Epub 2025 Jan 15.
5. Reservoir-computing based associative memory and itinerancy for complex dynamical attractors.
   Nat Commun. 2024 Jun 6;15(1):4840. doi: 10.1038/s41467-024-49190-4.
6. Simple and complex cells revisited: toward a selectivity-invariance model of object recognition.
   Front Comput Neurosci. 2023 Oct 13;17:1282828. doi: 10.3389/fncom.2023.1282828. eCollection 2023.
7. Modern synergetic neural network for imbalanced small data classification.
   Sci Rep. 2023 Sep 21;13(1):15669. doi: 10.1038/s41598-023-42689-8.
8. Transfer Learning with Kernel Methods.
   Nat Commun. 2023 Sep 9;14(1):5570. doi: 10.1038/s41467-023-41215-8.
9. Latent generative landscapes as maps of functional diversity in protein sequence space.
   Nat Commun. 2023 Apr 19;14(1):2222. doi: 10.1038/s41467-023-37958-z.
10. Recurrent predictive coding models for associative memory employing covariance learning.
    PLoS Comput Biol. 2023 Apr 14;19(4):e1010719. doi: 10.1371/journal.pcbi.1010719. eCollection 2023 Apr.

References

1. Reconciling modern machine-learning practice and the classical bias-variance trade-off.
   Proc Natl Acad Sci U S A. 2019 Aug 6;116(32):15849-15854. doi: 10.1073/pnas.1903070116. Epub 2019 Jul 24.
2. A fast learning algorithm for deep belief nets.
   Neural Comput. 2006 Jul;18(7):1527-54. doi: 10.1162/neco.2006.18.7.1527.
3. Neural networks and physical systems with emergent collective computational abilities.
   Proc Natl Acad Sci U S A. 1982 Apr;79(8):2554-8. doi: 10.1073/pnas.79.8.2554.
4. 'Unlearning' has a stabilizing effect in collective memories.
   Nature. 1983;304(5922):158-9. doi: 10.1038/304158a0.