

Principled neuromorphic reservoir computing.

Author information

Kleyko Denis, Kymn Christopher J, Thomas Anthony, Olshausen Bruno A, Sommer Friedrich T, Frady E Paxon

Affiliations

Centre for Applied Autonomous Sensor Systems, Örebro University, Örebro, Sweden.

Intelligent Systems Lab, RISE Research Institutes of Sweden, Kista, Sweden.

Publication information

Nat Commun. 2025 Jan 14;16(1):640. doi: 10.1038/s41467-025-55832-y.

Abstract

Reservoir computing advances the intriguing idea that a nonlinear recurrent neural circuit, the reservoir, can encode spatio-temporal input signals to enable efficient ways to perform tasks like classification or regression. However, the idea of a monolithic reservoir network that simultaneously buffers input signals and expands them into nonlinear features has recently been challenged. A representation scheme in which the memory buffer and the expansion into higher-order polynomial features can be configured separately has been shown to significantly outperform traditional reservoir computing in the prediction of multivariate time series. Here we propose a configurable neuromorphic representation scheme that provides competitive performance on prediction, but with significantly better scaling properties than directly materializing higher-order features as in prior work. Our approach combines the randomized representations of traditional reservoir computing with mathematical principles for approximating polynomial kernels via such representations. While the memory buffer can be realized with standard reservoir networks, computing higher-order features requires networks of 'Sigma-Pi' neurons, i.e., neurons that support both summation and multiplication of inputs. Finally, we provide an implementation of the memory buffer and Sigma-Pi networks on Loihi 2, an existing neuromorphic hardware platform.
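The kernel-approximation idea in the abstract can be sketched in a few lines. Below is a minimal NumPy illustration, not the paper's implementation: a random-Maclaurin-style approximation of the homogeneous polynomial kernel k(x, y) = (x · y)^p, where each feature is a product of p independent random projections. The sum-inside, product-outside structure is exactly that of a Sigma-Pi unit. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def poly_kernel_features(X, degree=2, n_features=2000, seed=0):
    """Randomized features approximating the homogeneous polynomial
    kernel k(x, y) = (x . y) ** degree.

    Each of the n_features outputs is a product of `degree` independent
    random Rademacher (+/-1) projections of x: a weighted sum inside,
    a product outside, i.e. the structure of a Sigma-Pi neuron.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # `degree` stacks of n_features random +/-1 projection vectors
    W = rng.choice([-1.0, 1.0], size=(degree, n_features, d))
    Z = np.ones((n, n_features))
    for k in range(degree):
        Z *= X @ W[k].T  # fold in one projection per polynomial factor
    # Scale so that Z(x) . Z(y) is an unbiased estimate of k(x, y).
    return Z / np.sqrt(n_features)

# Sanity check: inner products of features approximate the kernel.
rng = np.random.default_rng(1)
X = rng.standard_normal((4, 6))
Z = poly_kernel_features(X, degree=2, n_features=20000)
K_approx = Z @ Z.T
K_exact = (X @ X.T) ** 2
```

Because the feature dimension is a free parameter (n_features) rather than growing combinatorially as d^degree when higher-order features are materialized directly, this kind of randomized scheme illustrates the scaling advantage the abstract refers to.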

