On separating long- and short-term memories in hyperdimensional computing.

Authors

Teeters Jeffrey L, Kleyko Denis, Kanerva Pentti, Olshausen Bruno A

Affiliations

Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, CA, United States.

Intelligent Systems Lab, Research Institutes of Sweden, Kista, Sweden.

Publication

Front Neurosci. 2023 Jan 9;16:867568. doi: 10.3389/fnins.2022.867568. eCollection 2022.

Abstract

Operations on high-dimensional, fixed-width vectors can be used to distribute information from several vectors over a single vector of the same width. For example, a set of key-value pairs can be encoded into a single vector with multiplication and addition of the corresponding key and value vectors: the keys are bound to their values with component-wise multiplication, and the key-value pairs are combined into a single superposition vector with component-wise addition. The superposition vector is, thus, a memory which can then be queried for the value of any of the keys, but the result of the query is approximate. The exact vector is retrieved from a codebook (a.k.a. item memory), which contains vectors defined in the system. To perform these operations, the item memory vectors and the superposition vector must be the same width. Increasing the capacity of the memory requires increasing the width of the superposition and item memory vectors. In this article, we demonstrate that in a regime where many (e.g., 1,000 or more) key-value pairs are stored, an associative memory which maps key vectors to value vectors requires less memory and less computing to obtain the same reliability of storage as a superposition vector. These advantages are obtained because the number of storage locations in an associative memory can be increased without increasing the width of the vectors in the item memory. An associative memory would not replace a superposition vector as a medium of storage, but could augment it, because data recalled from an associative memory could be used in algorithms that use a superposition vector. This would be analogous to how human working memory (which stores about seven items) uses information recalled from long-term memory (which is much larger than the working memory). We demonstrate the advantages of an associative memory experimentally using the storage of large finite-state automata, which could model the storage and recall of state-dependent behavior by brains.
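The bind-and-bundle scheme described in the abstract is straightforward to prototype. Below is a minimal sketch in Python/NumPy, assuming random bipolar (+1/-1) hypervectors; the width, the number of pairs, and all names (DIM, N_PAIRS, query) are illustrative choices for this sketch, not code from the paper.

import numpy as np

rng = np.random.default_rng(0)
DIM = 10_000   # hypervector width
N_PAIRS = 20   # key-value pairs to superpose

# Item memory (codebook): random bipolar hypervectors for keys and values.
keys = rng.choice([-1, 1], size=(N_PAIRS, DIM))
values = rng.choice([-1, 1], size=(N_PAIRS, DIM))

# Bind each key to its value by component-wise multiplication, then
# bundle all pairs into a single superposition vector by addition.
superposition = (keys * values).sum(axis=0)

def query(key):
    # Unbinding: multiplying by the key yields a noisy copy of its value.
    noisy = superposition * key
    # Clean-up: return the item-memory vector with the highest dot
    # product against the noisy result.
    return values[np.argmax(values @ noisy)]

k = 7
assert np.array_equal(query(keys[k]), values[k])  # exact after clean-up

Retrieval is exact after clean-up as long as the superposition is not overloaded. The paper's point is that in the regime of roughly a thousand or more pairs, an associative memory that maps key vectors directly to value vectors achieves the same reliability with less memory and less computation, because storage locations can be added without widening the vectors in the item memory.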

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/24a0/9869149/c47cfb63bbff/fnins-16-867568-g0001.jpg
