
Fine-tuning and the stability of recurrent neural networks.

Affiliations

Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, Canada.

Publication Information

PLoS One. 2011;6(9):e22885. doi: 10.1371/journal.pone.0022885. Epub 2011 Sep 27.

DOI: 10.1371/journal.pone.0022885
PMID: 21980334
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC3181247/
Abstract

A central criticism of standard theoretical approaches to constructing stable, recurrent model networks is that the synaptic connection weights need to be finely-tuned. This criticism is severe because proposed rules for learning these weights have been shown to have various limitations to their biological plausibility. Hence it is unlikely that such rules are used to continuously fine-tune the network in vivo. We describe a learning rule that is able to tune synaptic weights in a biologically plausible manner. We demonstrate and test this rule in the context of the oculomotor integrator, showing that only known neural signals are needed to tune the weights. We demonstrate that the rule appropriately accounts for a wide variety of experimental results, and is robust under several kinds of perturbation. Furthermore, we show that the rule is able to achieve stability as good as or better than that provided by the linearly optimal weights often used in recurrent models of the integrator. Finally, we discuss how this rule can be generalized to tune a wide variety of recurrent attractor networks, such as those found in head direction and path integration systems, suggesting that it may be used to tune a wide variety of stable neural systems.
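To make the tuning problem concrete, here is a toy sketch of the idea the abstract describes. This is NOT the paper's actual learning rule; it is a minimal illustration under assumed parameters (a scalar integrator, a 50-step fixation, a drift-based error signal) of why recurrent weights need fine-tuning and how a local feedback signal can provide it.

```python
# Toy sketch (not the paper's rule): a scalar "neural integrator"
# x[t+1] = w * x[t]. Perfectly holding an eye position requires the
# recurrent weight w = 1; any mistuning makes stored positions drift,
# as observed in damaged oculomotor integrators. A hypothetical
# feedback signal -- the drift measured during an input-free fixation,
# analogous to retinal slip -- nudges w toward the stable value.

w = 0.95                    # mistuned recurrent weight (leaky: activity decays)
eta = 0.01                  # small learning rate keeps the tuning stable

for trial in range(200):
    x = 1.0                 # stored eye position right after a saccade
    for _ in range(50):     # fixation period with no new input
        x = w * x           # activity should persist, but drifts if w != 1
    drift = x - 1.0         # error signal (e.g. retinal slip)
    w -= eta * drift        # adjust the weight to cancel the drift

print(round(w, 4))          # converges toward 1.0, the integrating regime
```

With a 50-step fixation the update contracts toward w = 1 only while eta × 50 < 2; the small eta used here gives smooth, monotone convergence, which is why the learning rate matters even in this one-dimensional caricature.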


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e30c/3181247/a93a38d7fd44/pone.0022885.g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e30c/3181247/4ac34c9f43b7/pone.0022885.g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e30c/3181247/635c95bb4df4/pone.0022885.g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e30c/3181247/77d173fffcf2/pone.0022885.g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e30c/3181247/4b7d6285b43c/pone.0022885.g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e30c/3181247/95e55182ef0c/pone.0022885.g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e30c/3181247/16bcffdbad35/pone.0022885.g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e30c/3181247/2112847f104d/pone.0022885.g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e30c/3181247/5a91f44ebc4f/pone.0022885.g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e30c/3181247/8a8fce6d18a1/pone.0022885.g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e30c/3181247/43edbe7205cd/pone.0022885.g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e30c/3181247/6226bab7e0b6/pone.0022885.g012.jpg

Similar Articles

1
Fine-tuning and the stability of recurrent neural networks.
PLoS One. 2011;6(9):e22885. doi: 10.1371/journal.pone.0022885. Epub 2011 Sep 27.
2
How the brain keeps the eyes still.
Proc Natl Acad Sci U S A. 1996 Nov 12;93(23):13339-44. doi: 10.1073/pnas.93.23.13339.
3
Plasticity and tuning by visual feedback of the stability of a neural integrator.
Proc Natl Acad Sci U S A. 2004 May 18;101(20):7739-44. doi: 10.1073/pnas.0401970101. Epub 2004 May 10.
4
A learning rule for very simple universal approximators consisting of a single layer of perceptrons.
Neural Netw. 2008 Jun;21(5):786-95. doi: 10.1016/j.neunet.2007.12.036. Epub 2007 Dec 31.
5
Gradient learning in spiking neural networks by dynamic perturbation of conductances.
Phys Rev Lett. 2006 Jul 28;97(4):048104. doi: 10.1103/PhysRevLett.97.048104.
6
Supervised Learning in Spiking Neural Networks for Precise Temporal Encoding.
PLoS One. 2016 Aug 17;11(8):e0161335. doi: 10.1371/journal.pone.0161335. eCollection 2016.
7
The oculomotor integrator: testing of a neural network model.
Exp Brain Res. 1997 Jan;113(1):57-74. doi: 10.1007/BF02454142.
8
A learning network model of the neural integrator of the oculomotor system.
Biol Cybern. 1991;64(6):447-54. doi: 10.1007/BF00202608.
9
Learning accurate path integration in ring attractor models of the head direction system.
Elife. 2022 Jun 20;11:e69841. doi: 10.7554/eLife.69841.
10
Learning rule of homeostatic synaptic scaling: presynaptic dependent or not.
Neural Comput. 2011 Dec;23(12):3145-61. doi: 10.1162/NECO_a_00210. Epub 2011 Sep 15.

Cited By

1
Lyapunov theory demonstrating a fundamental limit on the speed of systems consolidation.
Phys Rev Res. 2025 Apr-Jun;7(2). doi: 10.1103/physrevresearch.7.023174. Epub 2025 May 21.
2
Non-apical plateau potentials and persistent firing induced by metabotropic cholinergic modulation in layer 2/3 pyramidal cells in the rat prefrontal cortex.
PLoS One. 2024 Dec 10;19(12):e0314652. doi: 10.1371/journal.pone.0314652. eCollection 2024.
3
A whole-task brain model of associative recognition that accounts for human behavior and neuroimaging data.
PLoS Comput Biol. 2023 Sep 8;19(9):e1011427. doi: 10.1371/journal.pcbi.1011427. eCollection 2023 Sep.
4
Exploiting semantic information in a spiking neural SLAM system.
Front Neurosci. 2023 Jul 5;17:1190515. doi: 10.3389/fnins.2023.1190515. eCollection 2023.
5
Biologically-Based Computation: How Neural Details and Dynamics Are Suited for Implementing a Variety of Algorithms.
Brain Sci. 2023 Jan 31;13(2):245. doi: 10.3390/brainsci13020245.
6
Constructing functional models from biophysically-detailed neurons.
PLoS Comput Biol. 2022 Sep 8;18(9):e1010461. doi: 10.1371/journal.pcbi.1010461. eCollection 2022 Sep.
7
Unsupervised learning for robust working memory.
PLoS Comput Biol. 2022 May 2;18(5):e1009083. doi: 10.1371/journal.pcbi.1009083. eCollection 2022 May.
8
Learning to Approximate Functions Using Nb-Doped SrTiO3 Memristors.
Front Neurosci. 2021 Feb 19;14:627276. doi: 10.3389/fnins.2020.627276. eCollection 2020.
9
Nengo and Low-Power AI Hardware for Robust, Embedded Neurorobotics.
Front Neurorobot. 2020 Oct 9;14:568359. doi: 10.3389/fnbot.2020.568359. eCollection 2020.
10
NengoDL: Combining Deep Learning and Neuromorphic Modelling Methods.
Neuroinformatics. 2019 Oct;17(4):611-628. doi: 10.1007/s12021-019-09424-z.

References

1
Spike-based reinforcement learning in continuous state and action space: when policy gradient methods fail.
PLoS Comput Biol. 2009 Dec;5(12):e1000586. doi: 10.1371/journal.pcbi.1000586. Epub 2009 Dec 4.
2
Synaptic depolarization is more effective than back-propagating action potentials during induction of associative long-term potentiation in hippocampal pyramidal neurons.
J Neurosci. 2009 Mar 11;29(10):3233-41. doi: 10.1523/JNEUROSCI.6000-08.2009.
3
Memory without feedback in a neural network.
Neuron. 2009 Feb 26;61(4):621-34. doi: 10.1016/j.neuron.2008.12.012.
4
Factors influencing pursuit ability in infantile nystagmus syndrome: Target timing and foveation capability.
Vision Res. 2009 Jan;49(2):182-9. doi: 10.1016/j.visres.2008.10.007. Epub 2008 Nov 28.
5
Neural integrator: a sandpile model.
Neural Comput. 2008 Oct;20(10):2379-417. doi: 10.1162/neco.2008.12-06-416.
6
Bayesian spiking neurons I: inference.
Neural Comput. 2008 Jan;20(1):91-117. doi: 10.1162/neco.2008.20.1.91.
7
Higher-dimensional neurons explain the tuning and dynamics of working memory cells.
J Neurosci. 2006 Apr 5;26(14):3667-78. doi: 10.1523/JNEUROSCI.4864-05.2006.
8
Mechanism of graded persistent cellular activity of entorhinal cortex layer V neurons.
Neuron. 2006 Mar 2;49(5):735-46. doi: 10.1016/j.neuron.2006.01.036.
9
A unified approach to building and controlling spiking attractor networks.
Neural Comput. 2005 Jun;17(6):1276-314. doi: 10.1162/0899766053630332.
10
A controlled attractor network model of path integration in the rat.
J Comput Neurosci. 2005 Mar-Apr;18(2):183-203. doi: 10.1007/s10827-005-6558-z.