

Input correlations impede suppression of chaos and learning in balanced firing-rate networks.

Affiliations

Zuckerman Mind, Brain, Behavior Institute, Columbia University, New York, New York, United States of America.

The Abdus Salam International Centre for Theoretical Physics, Trieste, Italy.

Publication

PLoS Comput Biol. 2022 Dec 5;18(12):e1010590. doi: 10.1371/journal.pcbi.1010590. eCollection 2022 Dec.

DOI: 10.1371/journal.pcbi.1010590
PMID: 36469504
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9754616/
Abstract

Neural circuits exhibit complex activity patterns, both spontaneously and evoked by external stimuli. Information encoding and learning in neural circuits depend on how well time-varying stimuli can control spontaneous network activity. We show that in firing-rate networks in the balanced state, external control of recurrent dynamics, i.e., the suppression of internally-generated chaotic variability, strongly depends on correlations in the input. A distinctive feature of balanced networks is that, because common external input is dynamically canceled by recurrent feedback, it is far more difficult to suppress chaos with common input into each neuron than through independent input. To study this phenomenon, we develop a non-stationary dynamic mean-field theory for driven networks. The theory explains how the activity statistics and the largest Lyapunov exponent depend on the frequency and amplitude of the input, recurrent coupling strength, and network size, for both common and independent input. We further show that uncorrelated inputs facilitate learning in balanced networks.
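The chaos-suppression effect described in the abstract can be sketched with a toy simulation. The code below is an illustration, not the paper's model: it uses an unstructured Gaussian random rate network (Sompolinsky-style) rather than the balanced excitatory-inhibitory architecture the paper studies, and it estimates the largest Lyapunov exponent by Benettin-style tracking of a renormalized perturbation. All parameter values and the function name are illustrative choices.

```python
import numpy as np

def largest_lyapunov(N=200, g=2.0, T=200.0, dt=0.05,
                     amp=0.0, freq=0.5, common=True, seed=0):
    """Euler-integrate dx/dt = -x + J*tanh(x) + I(t) and estimate the
    largest Lyapunov exponent from a continuously renormalized
    perturbed copy of the trajectory (Benettin's method)."""
    rng = np.random.default_rng(seed)
    # Random coupling with gain g; chaotic without input for g > 1.
    J = g * rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
    # Input direction: identical for all neurons (common) or random
    # per neuron (independent). Note: the paper's common-vs-independent
    # contrast relies on balanced E-I structure, which this toy
    # unstructured network does not have.
    u = np.ones(N) if common else rng.normal(0.0, 1.0, N)
    x = rng.normal(0.0, 1.0, N)
    y = x + 1e-6 * rng.normal(0.0, 1.0, N)      # perturbed trajectory
    d0 = np.linalg.norm(y - x)
    log_growth = 0.0
    for k in range(int(T / dt)):
        I = amp * np.sin(2 * np.pi * freq * k * dt) * u
        x = x + dt * (-x + J @ np.tanh(x) + I)
        y = y + dt * (-y + J @ np.tanh(y) + I)
        d = np.linalg.norm(y - x)
        log_growth += np.log(d / d0)
        y = x + (d0 / d) * (y - x)              # renormalize perturbation
    return log_growth / T

lam_free = largest_lyapunov(amp=0.0)            # no input: chaotic
lam_driven = largest_lyapunov(amp=2.0, common=False)  # independent drive
print(round(lam_free, 3), round(lam_driven, 3))
```

With no input the exponent is positive (chaos); a sufficiently strong independent sinusoidal drive reduces it, which is the suppression phenomenon whose dependence on input correlations the paper analyzes with dynamic mean-field theory.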


Figures (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fecd/9754616/892606c19b25/pcbi.1010590.g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fecd/9754616/9b9ad721e5f4/pcbi.1010590.g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fecd/9754616/7fd58c657c18/pcbi.1010590.g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fecd/9754616/a444fff37af6/pcbi.1010590.g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fecd/9754616/fb5baad3c5e9/pcbi.1010590.g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fecd/9754616/8e0e0934a8ed/pcbi.1010590.g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fecd/9754616/46cf73158721/pcbi.1010590.g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fecd/9754616/3a7974be19d5/pcbi.1010590.g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fecd/9754616/93426604c55a/pcbi.1010590.g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fecd/9754616/5631e816b9c6/pcbi.1010590.g010.jpg

Similar Articles

1. Input correlations impede suppression of chaos and learning in balanced firing-rate networks.
PLoS Comput Biol. 2022 Dec 5;18(12):e1010590. doi: 10.1371/journal.pcbi.1010590. eCollection 2022 Dec.
2. Stimulus-dependent suppression of chaos in recurrent neural networks.
Phys Rev E Stat Nonlin Soft Matter Phys. 2010 Jul;82(1 Pt 1):011903. doi: 10.1103/PhysRevE.82.011903. Epub 2010 Jul 7.
3. Transitions between asynchronous and synchronous states: a theory of correlations in small neural circuits.
J Comput Neurosci. 2018 Feb;44(1):25-43. doi: 10.1007/s10827-017-0667-3. Epub 2017 Nov 10.
4. Comparison of the dynamics of neural interactions between current-based and conductance-based integrate-and-fire recurrent networks.
Front Neural Circuits. 2014 Mar 5;8:12. doi: 10.3389/fncir.2014.00012. eCollection 2014.
5. Role of input correlations in shaping the variability and noise correlations of evoked activity in the neocortex.
J Neurosci. 2015 Jun 3;35(22):8611-25. doi: 10.1523/JNEUROSCI.4536-14.2015.
6. Decorrelation of neural-network activity by inhibitory feedback.
PLoS Comput Biol. 2012 Aug;8(8):e1002596. doi: 10.1371/journal.pcbi.1002596. Epub 2012 Aug 2.
7. Orientation selectivity in inhibition-dominated networks of spiking neurons: effect of single neuron properties and network dynamics.
PLoS Comput Biol. 2015 Jan 8;11(1):e1004045. doi: 10.1371/journal.pcbi.1004045. eCollection 2015 Jan.
8. Robust timing and motor patterns by taming chaos in recurrent neural networks.
Nat Neurosci. 2013 Jul;16(7):925-33. doi: 10.1038/nn.3405. Epub 2013 May 26.
9. Resonance with subthreshold oscillatory drive organizes activity and optimizes learning in neural networks.
Proc Natl Acad Sci U S A. 2018 Mar 27;115(13):E3017-E3025. doi: 10.1073/pnas.1716933115. Epub 2018 Mar 15.
10. Encoding in Balanced Networks: Revisiting Spike Patterns and Chaos in Stimulus-Driven Systems.
PLoS Comput Biol. 2016 Dec 14;12(12):e1005258. doi: 10.1371/journal.pcbi.1005258. eCollection 2016 Dec.

Cited By

1. Balanced state of networks of winner-take-all units.
PLoS Comput Biol. 2025 Jun 11;21(6):e1013081. doi: 10.1371/journal.pcbi.1013081. eCollection 2025 Jun.
2. Desegregation of neuronal predictive processing.
bioRxiv. 2024 Aug 7:2024.08.05.606684. doi: 10.1101/2024.08.05.606684.
3. Exploring Flip Flop memories and beyond: training Recurrent Neural Networks with key insights.

References

1. Sparse balance: Excitatory-inhibitory networks with small bias currents and broadly distributed synaptic weights.
PLoS Comput Biol. 2022 Feb 9;18(2):e1008836. doi: 10.1371/journal.pcbi.1008836. eCollection 2022 Feb.
2. What is the dynamical regime of cerebral cortex?
Neuron. 2021 Nov 3;109(21):3373-3391. doi: 10.1016/j.neuron.2021.07.031. Epub 2021 Aug 30.
3. Inhibition stabilization is a widespread property of cortical networks.
Front Syst Neurosci. 2024 Mar 27;18:1269190. doi: 10.3389/fnsys.2024.1269190. eCollection 2024.
4. Multitasking via baseline control in recurrent neural networks.
Proc Natl Acad Sci U S A. 2023 Aug 15;120(33):e2304394120. doi: 10.1073/pnas.2304394120. Epub 2023 Aug 7.
5. Elife. 2020 Jun 29;9:e54875. doi: 10.7554/eLife.54875.
6. Training dynamically balanced excitatory-inhibitory networks.
PLoS One. 2019 Aug 8;14(8):e0220547. doi: 10.1371/journal.pone.0220547. eCollection 2019.
7. How single neuron properties shape chaotic dynamics and signal transmission in random neural networks.
PLoS Comput Biol. 2019 Jun 10;15(6):e1007122. doi: 10.1371/journal.pcbi.1007122. eCollection 2019 Jun.
8. Learning recurrent dynamics in spiking networks.
Elife. 2018 Sep 20;7:e37124. doi: 10.7554/eLife.37124.
9. Inferring single-trial neural population dynamics using sequential auto-encoders.
Nat Methods. 2018 Oct;15(10):805-815. doi: 10.1038/s41592-018-0109-9. Epub 2018 Sep 17.
10. Linking Connectivity, Dynamics, and Computations in Low-Rank Recurrent Neural Networks.
Neuron. 2018 Aug 8;99(3):609-623.e29. doi: 10.1016/j.neuron.2018.07.003. Epub 2018 Jul 26.
11. full-FORCE: A target-based method for training recurrent networks.
PLoS One. 2018 Feb 7;13(2):e0191527. doi: 10.1371/journal.pone.0191527. eCollection 2018.
12. A canonical neural mechanism for behavioral variability.
Nat Commun. 2017 May 22;8:15415. doi: 10.1038/ncomms15415.