Initialization and self-organized optimization of recurrent neural network connectivity.

Author Information

Boedecker Joschka, Obst Oliver, Mayer N Michael, Asada Minoru

Publication Information

HFSP J. 2009 Oct;3(5):340-9. doi: 10.2976/1.3240502. Epub 2009 Oct 26.

Abstract

Reservoir computing (RC) is a recent paradigm in the field of recurrent neural networks. Networks in RC have a sparsely and randomly connected fixed hidden layer, and only output connections are trained. RC networks have recently received increased attention as a mathematical model for generic neural microcircuits, used to investigate and explain computations in neocortical columns. Applied to specific tasks, however, their fixed random connectivity leads to significant variation in performance. Few problem-specific optimization procedures are known; such procedures would be important for engineering applications, but also for understanding how networks in biology are shaped to be optimally adapted to the requirements of their environment. We study a general network initialization method using permutation matrices and derive a new unsupervised learning rule based on intrinsic plasticity (IP). The IP-based learning uses only local information, and its aim is to improve network performance in a self-organized way. Using three different benchmarks, we show that networks whose reservoir connectivity is given by permutation matrices have much more persistent memory than networks initialized by the other methods, while remaining able to perform highly nonlinear mappings. We also show that IP based on sigmoid transfer functions is limited with respect to the output distributions that can be achieved.
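As a rough illustration of the two ingredients the abstract names, the NumPy sketch below builds a reservoir weight matrix from a random permutation matrix (scaled below 1 so the echo state property holds, since a permutation matrix is orthogonal with all eigenvalues on the unit circle) and applies a local gain-and-bias intrinsic plasticity update. The IP rule shown is the generic Gaussian-target rule for tanh units in the style of Triesch (2005), not the specific rule derived in the paper; the reservoir size, spectral radius, input scaling, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reservoir initialization with a permutation matrix: each unit gets exactly
# one incoming recurrent connection, and the matrix is orthogonal, so scaling
# by 0.95 sets the spectral radius to 0.95 (echo state property preserved).
N = 100                                  # reservoir size (illustrative)
perm = rng.permutation(N)
W = np.zeros((N, N))
W[np.arange(N), perm] = 1.0              # permutation matrix
W *= 0.95                                # scale below 1

W_in = rng.uniform(-0.1, 0.1, size=N)    # input weights (illustrative choice)

# Intrinsic plasticity: adapt per-neuron gain a and bias b so that each
# unit's output distribution approaches a Gaussian N(mu, sigma2). This is the
# generic tanh/Gaussian IP rule, not the variant derived in the paper.
a, b = np.ones(N), np.zeros(N)
y = np.zeros(N)                          # reservoir state
eta, mu, sigma2 = 1e-3, 0.0, 0.04        # learning rate, target mean/variance

def step(u):
    """One reservoir update followed by a purely local IP adjustment."""
    global y, a, b
    net = W @ y + W_in * u               # net input to each unit
    y = np.tanh(a * net + b)
    db = -eta * (-mu / sigma2 + (y / sigma2) * (2 * sigma2 + 1 - y**2 + mu * y))
    a += eta / a + db * net              # gain update
    b += db                              # bias update
    return y

for u in rng.standard_normal(1000):      # drive with white-noise input
    step(u)
```

Note that the update for each unit depends only on that unit's own net input and output, consistent with the abstract's emphasis on purely local, self-organized learning.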

Similar Articles

1
Initialization and self-organized optimization of recurrent neural network connectivity.
HFSP J. 2009 Oct;3(5):340-9. doi: 10.2976/1.3240502. Epub 2009 Oct 26.
2
Composing recurrent spiking neural networks using locally-recurrent motifs and risk-mitigating architectural optimization.
Front Neurosci. 2024 Jun 20;18:1412559. doi: 10.3389/fnins.2024.1412559. eCollection 2024.
3
Contextual Integration in Cortical and Convolutional Neural Networks.
Front Comput Neurosci. 2020 Apr 23;14:31. doi: 10.3389/fncom.2020.00031. eCollection 2020.
4
Biologically plausible deep learning - But how far can we go with shallow networks?
Neural Netw. 2019 Oct;118:90-101. doi: 10.1016/j.neunet.2019.06.001. Epub 2019 Jun 20.
5
Unsupervised Learning and Clustered Connectivity Enhance Reinforcement Learning in Spiking Neural Networks.
Front Comput Neurosci. 2021 Mar 4;15:543872. doi: 10.3389/fncom.2021.543872. eCollection 2021.
6
Computational capabilities of random automata networks for reservoir computing.
Phys Rev E Stat Nonlin Soft Matter Phys. 2013 Apr;87(4):042808. doi: 10.1103/PhysRevE.87.042808. Epub 2013 Apr 16.
7
Chaotic time series prediction using phase space reconstruction based conceptor network.
Cogn Neurodyn. 2020 Dec;14(6):849-857. doi: 10.1007/s11571-020-09612-7. Epub 2020 Jul 23.
8
Connectivity, dynamics, and memory in reservoir computing with binary and analog neurons.
Neural Comput. 2010 May;22(5):1272-311. doi: 10.1162/neco.2009.01-09-947.
9
An unsupervised parameter learning model for RVFL neural network.
Neural Netw. 2019 Apr;112:85-97. doi: 10.1016/j.neunet.2019.01.007. Epub 2019 Jan 28.

Cited By

1
Local Homeostatic Regulation of the Spectral Radius of Echo-State Networks.
Front Comput Neurosci. 2021 Feb 24;15:587721. doi: 10.3389/fncom.2021.587721. eCollection 2021.
2
Dynamic Neural Fields with Intrinsic Plasticity.
Front Comput Neurosci. 2017 Aug 31;11:74. doi: 10.3389/fncom.2017.00074. eCollection 2017.
3
Information processing in echo state networks at the edge of chaos.
Theory Biosci. 2012 Sep;131(3):205-13. doi: 10.1007/s12064-011-0146-8. Epub 2011 Dec 7.
4
Guided self-organization.
HFSP J. 2009 Oct;3(5):287-9. doi: 10.1080/19552068.2009.9635816. Epub 2009 Oct 7.

References

1
Learning long-term dependencies with gradient descent is difficult.
IEEE Trans Neural Netw. 1994;5(2):157-66. doi: 10.1109/72.279181.
2
Fading memory and time series prediction in recurrent networks with different forms of plasticity.
Neural Netw. 2007 Apr;20(3):312-22. doi: 10.1016/j.neunet.2007.04.020. Epub 2007 May 3.
3
An experimental unification of reservoir computing methods.
Neural Netw. 2007 Apr;20(3):391-403. doi: 10.1016/j.neunet.2007.04.003. Epub 2007 Apr 29.
4
Online reservoir adaptation by intrinsic plasticity for backpropagation-decorrelation and echo state learning.
Neural Netw. 2007 Apr;20(3):353-64. doi: 10.1016/j.neunet.2007.04.011. Epub 2007 May 3.
5
Synergies between intrinsic and synaptic plasticity mechanisms.
Neural Comput. 2007 Apr;19(4):885-909. doi: 10.1162/neco.2007.19.4.885.
6
Training recurrent networks by Evolino.
Neural Comput. 2007 Mar;19(3):757-79. doi: 10.1162/neco.2007.19.3.757.
7
Analysis and design of echo state networks.
Neural Comput. 2007 Jan;19(1):111-38. doi: 10.1162/neco.2007.19.1.111.
8
Real-time computation at the edge of chaos in recurrent neural networks.
Neural Comput. 2004 Jul;16(7):1413-36. doi: 10.1162/089976604323057443.
9
Short-term memory in orthogonal neural networks.
Phys Rev Lett. 2004 Apr 9;92(14):148102. doi: 10.1103/PhysRevLett.92.148102.
10
Harnessing nonlinearity: predicting chaotic systems and saving energy in wireless communication.
Science. 2004 Apr 2;304(5667):78-80. doi: 10.1126/science.1091277.
