

full-FORCE: A target-based method for training recurrent networks.

Authors

DePasquale Brian, Cueva Christopher J, Rajan Kanaka, Escola G Sean, Abbott L F

Affiliations

Department of Neuroscience, Zuckerman Institute, Columbia University, New York, NY, United States of America.

Joseph Henry Laboratories of Physics and Lewis-Sigler Institute for Integrative Genomics, Princeton University, Princeton, NJ, United States of America.

Publication

PLoS One. 2018 Feb 7;13(2):e0191527. doi: 10.1371/journal.pone.0191527. eCollection 2018.

DOI: 10.1371/journal.pone.0191527
PMID: 29415041
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC5802861/
Abstract

Trained recurrent networks are powerful tools for modeling dynamic neural computations. We present a target-based method for modifying the full connectivity matrix of a recurrent network to train it to perform tasks involving temporally complex input/output transformations. The method introduces a second network during training to provide suitable "target" dynamics useful for performing the task. Because it exploits the full recurrent connectivity, the method produces networks that perform tasks with fewer neurons and greater noise robustness than traditional least-squares (FORCE) approaches. In addition, we show how introducing additional input signals into the target-generating network, which act as task hints, greatly extends the range of tasks that can be learned and provides control over the complexity and nature of the dynamics of the trained, task-performing network.
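To make the target-based idea above concrete, the following is a minimal, rate-based NumPy sketch of a full-FORCE-style training loop. It is a simplification under stated assumptions, not the paper's implementation: a batch ridge-regression solve replaces the recursive least-squares (RLS) updates used in the paper, the network size, gain, time constants, and the sine-wave task are all illustrative choices, and the single training pass omits the paper's noise injection and task hints. The core structure matches the abstract: a second, driven "target-generating" network supplies target currents, and the full recurrent matrix J of the task network is fit so that J r(t) reproduces them.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 2000          # neurons, time steps (illustrative sizes)
dt, tau, g = 0.01, 0.1, 1.5

# Random recurrent weights of the driven target-generating network.
JD = g * rng.standard_normal((N, N)) / np.sqrt(N)
u_out = rng.uniform(-1.0, 1.0, N)     # weights feeding the output back in

# Hypothetical demo task: autonomously produce a 1 Hz sine wave.
t = np.arange(T) * dt
f_out = np.sin(2 * np.pi * t)

# 1) Run the target network, driven by the desired output f_out, and
#    record its rates and the "target" recurrent currents.
x = 0.1 * rng.standard_normal(N)
R = np.zeros((T, N))                  # rates r(t)
target_currents = np.zeros((T, N))    # JD r(t) + u_out f_out(t)
for k in range(T):
    r = np.tanh(x)
    R[k] = r
    target_currents[k] = JD @ r + u_out * f_out[k]
    x = x + (dt / tau) * (-x + target_currents[k])

# 2) Batch ridge regression (in place of the paper's RLS): find J so that
#    J r(t) matches the target currents, and output weights w so that
#    w . r(t) matches f_out(t).
lam = 1e-3
G = R.T @ R + lam * np.eye(N)
J = np.linalg.solve(G, R.T @ target_currents).T
w = np.linalg.solve(G, R.T @ f_out)

# 3) Run the trained task network autonomously and read out the output.
x = 0.1 * rng.standard_normal(N)
out = np.zeros(T)
for k in range(T):
    r = np.tanh(x)
    out[k] = r @ w
    x = x + (dt / tau) * (-x + J @ r)
```

Because the target currents are linear in the recorded rates up to the fed-back output term, the least-squares fit mainly has to absorb u_out f_out(t) into the recurrent matrix; this is the sense in which full-FORCE modifies the full connectivity rather than only a rank-one feedback loop as in FORCE.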


Figures (g001–g008):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3672/5802861/bb0458a73cc1/pone.0191527.g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3672/5802861/5228b71ae97f/pone.0191527.g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3672/5802861/c64acee78510/pone.0191527.g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3672/5802861/237577990727/pone.0191527.g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3672/5802861/b12647e05f9d/pone.0191527.g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3672/5802861/d82cd1128da5/pone.0191527.g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3672/5802861/45540015ba72/pone.0191527.g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3672/5802861/561a69e3026f/pone.0191527.g008.jpg

Similar Articles

1. full-FORCE: A target-based method for training recurrent networks.
PLoS One. 2018 Feb 7;13(2):e0191527. doi: 10.1371/journal.pone.0191527. eCollection 2018.
2. Spatiotemporal dynamics in spiking recurrent neural networks using modified-full-FORCE on EEG signals.
Sci Rep. 2022 Feb 21;12(1):2896. doi: 10.1038/s41598-022-06573-1.
3. Transferring learning from external to internal weights in echo-state networks with sparse connectivity.
PLoS One. 2012;7(5):e37372. doi: 10.1371/journal.pone.0037372. Epub 2012 May 24.
4. A Geometrical Analysis of Global Stability in Trained Feedback Networks.
Neural Comput. 2019 Jun;31(6):1139-1182. doi: 10.1162/neco_a_01187. Epub 2019 Apr 12.
5. PsychRNN: An Accessible and Flexible Python Package for Training Recurrent Neural Network Models on Cognitive Tasks.
eNeuro. 2021 Jan 15;8(1). doi: 10.1523/ENEURO.0427-20.2020. Print 2021 Jan-Feb.
6. A scalable implementation of the recursive least-squares algorithm for training spiking neural networks.
Front Neuroinform. 2023 Jun 27;17:1099510. doi: 10.3389/fninf.2023.1099510. eCollection 2023.
7. Learning recurrent dynamics in spiking networks.
Elife. 2018 Sep 20;7:e37124. doi: 10.7554/eLife.37124.
8. Learning the Synaptic and Intrinsic Membrane Dynamics Underlying Working Memory in Spiking Neural Network Models.
Neural Comput. 2021 Nov 12;33(12):3264-3287. doi: 10.1162/neco_a_01409.
9. The covariance perceptron: A new paradigm for classification and processing of time series in recurrent neuronal networks.
PLoS Comput Biol. 2020 Oct 12;16(10):e1008127. doi: 10.1371/journal.pcbi.1008127. eCollection 2020 Oct.
10. Training Spiking Neural Networks in the Strong Coupling Regime.
Neural Comput. 2021 Apr 13;33(5):1199-1233. doi: 10.1162/neco_a_01379.

Cited By

1. Taming the chaos gently: a predictive alignment learning rule in recurrent neural networks.
Nat Commun. 2025 Jul 23;16(1):6784. doi: 10.1038/s41467-025-61309-9.
2. Comparison of FORCE trained spiking and rate neural networks shows spiking networks learn slowly with noisy, cross-trial firing rates.
PLoS Comput Biol. 2025 Jul 21;21(7):e1013224. doi: 10.1371/journal.pcbi.1013224. eCollection 2025 Jul.
3. Stable recurrent dynamics in heterogeneous neuromorphic computing systems using excitatory and inhibitory plasticity.
Nat Commun. 2025 Jul 1;16(1):5522. doi: 10.1038/s41467-025-60697-2.
4. Constructing biologically constrained RNNs via Dale's backprop and topologically-informed pruning.
bioRxiv. 2025 Jan 13:2025.01.09.632231. doi: 10.1101/2025.01.09.632231.
5. Learning to express reward prediction error-like dopaminergic activity requires plastic representations of time.
Nat Commun. 2024 Jul 12;15(1):5856. doi: 10.1038/s41467-024-50205-3.
6. Probing latent brain dynamics in Alzheimer's disease via recurrent neural network.
Cogn Neurodyn. 2024 Jun;18(3):1183-1195. doi: 10.1007/s11571-023-09981-9. Epub 2023 Jun 14.
7. The impact of spike timing precision and spike emission reliability on decoding accuracy.
Sci Rep. 2024 May 8;14(1):10536. doi: 10.1038/s41598-024-58524-7.
8. Memorable first impressions.
Elife. 2024 May 3;13:e98274. doi: 10.7554/eLife.98274.
9. Robust compression and detection of epileptiform patterns in ECoG using a real-time spiking neural network hardware framework.
Nat Commun. 2024 Apr 16;15(1):3255. doi: 10.1038/s41467-024-47495-y.
10. Exploring Flip Flop memories and beyond: training Recurrent Neural Networks with key insights.
Front Syst Neurosci. 2024 Mar 27;18:1269190. doi: 10.3389/fnsys.2024.1269190. eCollection 2024.

References

1. Driving reservoir models with oscillations: a solution to the extreme structural sensitivity of chaotic networks.
J Comput Neurosci. 2016 Dec;41(3):305-322. doi: 10.1007/s10827-016-0619-3. Epub 2016 Sep 2.
2. Recurrent Network Models of Sequence Generation and Memory.
Neuron. 2016 Apr 6;90(1):128-42. doi: 10.1016/j.neuron.2016.02.009. Epub 2016 Mar 10.
3. Building functional networks of spiking model neurons.
Nat Neurosci. 2016 Mar;19(3):350-5. doi: 10.1038/nn.4241.
4. A neural network that finds a naturalistic solution for the production of muscle activity.
Nat Neurosci. 2015 Jul;18(7):1025-33. doi: 10.1038/nn.4042. Epub 2015 Jun 15.
5. Deep learning.
Nature. 2015 May 28;521(7553):436-44. doi: 10.1038/nature14539.
6. Neural circuits as computational dynamical systems.
Curr Opin Neurobiol. 2014 Apr;25:156-63. doi: 10.1016/j.conb.2014.01.008. Epub 2014 Feb 5.
7. A modeling framework for deriving the structural and functional architecture of a short-term memory microcircuit.
Neuron. 2013 Sep 4;79(5):987-1000. doi: 10.1016/j.neuron.2013.06.041.
8. Robust timing and motor patterns by taming chaos in recurrent neural networks.
Nat Neurosci. 2013 Jul;16(7):925-33. doi: 10.1038/nn.3405. Epub 2013 May 26.
9. From fixed points to chaos: three models of delayed discrimination.
Prog Neurobiol. 2013 Apr;103:214-22. doi: 10.1016/j.pneurobio.2013.02.002. Epub 2013 Feb 21.
10. Opening the black box: low-dimensional dynamics in high-dimensional recurrent neural networks.
Neural Comput. 2013 Mar;25(3):626-49. doi: 10.1162/NECO_a_00409. Epub 2012 Dec 28.