Resonator Networks, 2: Factorization Performance and Capacity Compared to Optimization-Based Methods.

Affiliations

Redwood Center for Theoretical Neuroscience and Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA 94720, U.S.A.

Redwood Center for Theoretical Neuroscience and Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, U.S.A., and Intel Laboratories, Neuromorphic Computing Lab, San Francisco, CA 94111, U.S.A.

Publication Information

Neural Comput. 2020 Dec;32(12):2332-2388. doi: 10.1162/neco_a_01329. Epub 2020 Oct 20.

Abstract

We develop theoretical foundations of resonator networks, a new type of recurrent neural network introduced in Frady, Kent, Olshausen, and Sommer (2020), a companion article in this issue, to solve a high-dimensional vector factorization problem arising in Vector Symbolic Architectures. Given a composite vector formed by the Hadamard product between a discrete set of high-dimensional vectors, a resonator network can efficiently decompose the composite into these factors. We compare the performance of resonator networks against optimization-based methods, including Alternating Least Squares and several gradient-based algorithms, showing that resonator networks are superior in several important ways. This advantage is achieved by leveraging a combination of nonlinear dynamics and searching in superposition, by which estimates of the correct solution are formed from a weighted superposition of all possible solutions. While the alternative methods also search in superposition, the dynamics of resonator networks allow them to strike a more effective balance between exploring the solution space and exploiting local information to drive the network toward probable solutions. Resonator networks are not guaranteed to converge, but within a particular regime they almost always do. In exchange for relaxing the guarantee of global convergence, resonator networks are dramatically more effective at finding factorizations than all alternative approaches considered.
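
To make the factorization problem and the search-in-superposition dynamics concrete, the following is a minimal NumPy sketch of a basic resonator network on bipolar (±1) vectors, as described in the companion article: each factor estimate is refined by unbinding the other estimates from the composite (for bipolar vectors the Hadamard product is its own inverse), projecting through an outer-product cleanup memory built from the factor's codebook, and re-binarizing with a sign nonlinearity. The dimension N, codebook size D, iteration cap, and the tie-breaking `sgn` function are illustrative choices, not the paper's exact experimental settings.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1000  # vector dimension (illustrative)
D = 50    # candidate vectors per factor (illustrative)
F = 3     # number of factors

# Random bipolar codebooks: one N x D matrix per factor.
codebooks = [rng.choice([-1, 1], size=(N, D)) for _ in range(F)]

# Pick a ground-truth codeword from each codebook and bind them
# with the elementwise (Hadamard) product to form the composite.
true_idx = [int(rng.integers(D)) for _ in range(F)]
s = np.ones(N, dtype=int)
for X, k in zip(codebooks, true_idx):
    s = s * X[:, k]

def sgn(v):
    # Bipolar sign with ties broken to +1, so estimates stay in {-1, +1}.
    return np.where(v >= 0, 1, -1)

# Initialize each estimate as the superposition of all candidates.
estimates = [sgn(X.sum(axis=1)) for X in codebooks]

for t in range(200):
    old = [e.copy() for e in estimates]
    for f in range(F):
        # Unbind the other factors' current estimates from the composite.
        u = s.copy()
        for g in range(F):
            if g != f:
                u = u * estimates[g]
        # Clean up: project onto the codebook span (outer-product
        # memory X X^T), then re-binarize with the sign nonlinearity.
        X = codebooks[f]
        estimates[f] = sgn(X @ (X.T @ u))
    # Stop once the network reaches a fixed point.
    if all(np.array_equal(a, b) for a, b in zip(old, estimates)):
        break

# Read out each factor as the codeword most similar to its estimate.
decoded = [int(np.argmax(X.T @ e)) for X, e in zip(codebooks, estimates)]
print("true:", true_idx, "decoded:", decoded, "iterations:", t + 1)
```

In this sketch each estimate begins as an equally weighted superposition of every candidate, and the inner products with the unbound composite reweight that superposition toward increasingly probable codewords; as the abstract notes, convergence is not guaranteed, but within the operational regime the iteration almost always reaches the correct fixed point.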
