E. Paxon Frady, Denis Kleyko, Friedrich T. Sommer
Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, CA 94720, U.S.A.
Department of Computer Science, Electrical and Space Engineering, Luleå University of Technology, Luleå SE-971 87, Sweden
Neural Comput. 2018 Jun;30(6):1449-1513. doi: 10.1162/neco_a_01084. Epub 2018 Apr 13.
To accommodate structured approaches to neural computation, we propose a class of recurrent neural networks for indexing and storing sequences of symbols or analog data vectors. These networks with randomized input weights and orthogonal recurrent weights implement coding principles previously described in vector symbolic architectures (VSA) and leverage properties of reservoir computing. In general, the storage in reservoir computing is lossy, and crosstalk noise limits the retrieval accuracy and information capacity. A novel theory to optimize memory performance in such networks is presented and compared with simulation experiments. The theory describes linear readout of analog data and readout with winner-take-all error correction of symbolic data as proposed in VSA models. We find that diverse VSA models from the literature have universal performance properties, which are superior to what previous analyses predicted. Further, we propose novel VSA models with the statistically optimal Wiener filter in the readout that exhibit much higher information capacity, in particular for storing analog data. The theory we present also applies to memory buffers, that is, networks with gradual forgetting that can operate on infinite data streams without memory overflow. Interestingly, we find that different forgetting mechanisms, such as attenuating recurrent weights or neural nonlinearities, produce very similar behavior if the forgetting time constants are matched. Such models exhibit extensive capacity when their forgetting time constant is optimized for given noise conditions and network size. These results enable the design of new types of VSA models for the online processing of data streams.
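The storage and readout scheme described above can be made concrete in a short sketch. The following is a minimal numpy illustration, not the authors' code: it assumes random bipolar codes for the input weights, a random permutation matrix as the orthogonal recurrent weights, and winner-take-all readout by correlation against the codebook; all names and parameter values (N, D, T) are illustrative.

import numpy as np

rng = np.random.default_rng(0)
N = 2000   # network size (number of neurons); illustrative value
D = 27     # alphabet size (number of symbols)
T = 15     # length of the stored sequence

# Randomized input weights: one random bipolar code vector per symbol.
codebook = rng.choice([-1.0, 1.0], size=(D, N))

# Orthogonal recurrent weights: a random permutation (norm-preserving).
perm = rng.permutation(N)
inv = np.argsort(perm)   # inverse permutation, used for readout

def recur(x):
    # One application of the recurrent weights W (here, a permutation).
    return x[perm]

# Write: drive the network with the sequence, superimposing all items
# into a single memory trace: x_t = W x_{t-1} + V s_t.
seq = rng.integers(0, D, size=T)
trace = np.zeros(N)
for s in seq:
    trace = recur(trace) + codebook[s]

def read(trace, d):
    # Winner-take-all readout of the symbol stored d steps in the past.
    x = trace
    for _ in range(d):
        x = x[inv]                 # undo d applications of W
    scores = codebook @ x          # correlate with every code vector
    return int(np.argmax(scores))  # WTA cleanup against crosstalk noise

decoded = [read(trace, d) for d in range(T)][::-1]  # delay T-1 is item 0
print("stored :", seq.tolist())
print("decoded:", decoded)

A permutation is used here because it is exactly orthogonal and trivially invertible; the theory covers general orthogonal recurrent matrices, and the Wiener-filter readout mentioned in the abstract is omitted from this sketch.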
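The memory-buffer variant with gradual forgetting admits an equally small sketch, again only an illustration under the same assumptions: attenuating the recurrent weights by a factor lam < 1 makes old items decay geometrically, so an unbounded stream never overflows the trace, and the same winner-take-all readout as above recovers recent items while lam**d keeps their signal above the crosstalk noise.

import numpy as np

rng = np.random.default_rng(1)
N, D = 2000, 27                  # illustrative sizes, as above
codebook = rng.choice([-1.0, 1.0], size=(D, N))
perm = rng.permutation(N)

# Leaky trace: x_t = lam * W x_{t-1} + V s_t, with lam < 1
# (lam is the attenuation factor; "lambda" is a Python keyword).
lam = 0.96
stream = rng.integers(0, D, size=10_000)   # an effectively endless stream
buf = np.zeros(N)
for s in stream:
    buf = lam * buf[perm] + codebook[s]

# The most recent item (delay 0) is still read out by correlation + WTA.
print(stream[-1], int(np.argmax(codebook @ buf)))

Choosing lam trades recency for span: values near 1 remember further back but accumulate more crosstalk, which is the forgetting-time-constant optimization the abstract refers to.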