Overparameterized neural networks implement associative memory.

Affiliations

Laboratory for Information & Decision Systems, Massachusetts Institute of Technology, Cambridge, MA 02139.

Institute for Data, Systems, and Society, Massachusetts Institute of Technology, Cambridge, MA 02139.

Publication Information

Proc Natl Acad Sci U S A. 2020 Nov 3;117(44):27162-27170. doi: 10.1073/pnas.2005013117. Epub 2020 Oct 16.

Abstract

Identifying computational mechanisms for memorization and retrieval of data is a long-standing problem at the intersection of machine learning and neuroscience. Our main finding is that standard overparameterized deep neural networks trained using standard optimization methods implement such a mechanism for real-valued data. We provide empirical evidence that 1) overparameterized autoencoders store training samples as attractors and thus iterating the learned map leads to sample recovery, and that 2) the same mechanism allows for encoding sequences of examples and serves as an even more efficient mechanism for memory than autoencoding. Theoretically, we prove that when trained on a single example, autoencoders store the example as an attractor. Lastly, by treating a sequence encoder as a composition of maps, we prove that sequence encoding provides a more efficient mechanism for memory than autoencoding.
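The retrieval mechanism described in the abstract can be illustrated with a small experiment. The sketch below is not the authors' code; the network width, sample count, training length, and noise level are illustrative assumptions. It trains an overparameterized autoencoder to near-zero reconstruction error on a few random vectors and then iterates the learned map from a corrupted input; when the stored samples have become attractors, the iterates converge back to the nearest training example.

# Minimal sketch (illustrative, not the paper's implementation):
# train an overparameterized autoencoder f: R^d -> R^d on a few samples,
# then iterate x_{t+1} = f(x_t) from a noisy input to attempt retrieval.
import torch
import torch.nn as nn

torch.manual_seed(0)

d, n, width = 32, 4, 512                 # input dim, #samples, hidden width (assumed values)
X = torch.rand(n, d)                     # small "training set" to be memorized

f = nn.Sequential(                       # overparameterized autoencoder
    nn.Linear(d, width), nn.Tanh(),
    nn.Linear(width, width), nn.Tanh(),
    nn.Linear(width, d),
)

opt = torch.optim.Adam(f.parameters(), lr=1e-3)
for step in range(5000):                 # train to (near) zero reconstruction loss
    loss = ((f(X) - X) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Retrieval: start from a corrupted version of one sample and iterate the map.
x = X[0] + 0.1 * torch.randn(d)
with torch.no_grad():
    for _ in range(100):
        x = f(x)

print("distance to stored sample after iteration:", torch.norm(x - X[0]).item())

The sequence-encoding variant mentioned in the abstract would instead fit f(x_i) ≈ x_{i+1} over an ordered list of examples, so that iterating f from the first element walks through the stored sequence.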

Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2287/7959487/90fcc78ecdbe/pnas.2005013117fig01.jpg
