

Multiple Timescale Online Learning Rules for Information Maximization with Energetic Constraints.

Affiliation

Department of Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, MO, 63130, U.S.A.

Publication

Neural Comput. 2019 May;31(5):943-979. doi: 10.1162/neco_a_01182. Epub 2019 Mar 18.

DOI: 10.1162/neco_a_01182
PMID: 30883277
Abstract

A key aspect of the neural coding problem is understanding how representations of afferent stimuli are built through the dynamics of learning and adaptation within neural networks. The infomax paradigm is built on the premise that such learning attempts to maximize the mutual information between input stimuli and neural activities. In this letter, we tackle the problem of such information-based neural coding with an eye toward two conceptual hurdles. Specifically, we examine and then show how this form of coding can be achieved with online input processing. Our framework thus obviates the biological incompatibility of optimization methods that rely on global network awareness and batch processing of sensory signals. Central to our result is the use of variational bounds as a surrogate objective function, an established technique that has not previously been shown to yield online policies. We obtain learning dynamics for both linear-continuous and discrete spiking neural encoding models under the umbrella of linear gaussian decoders. This result is enabled by approximating certain information quantities in terms of neuronal activity via pairwise feedback mechanisms. Furthermore, we tackle the problem of how such learning dynamics can be realized with strict energetic constraints. We show that endowing networks with auxiliary variables that evolve on a slower timescale can allow for the realization of saddle-point optimization within the neural dynamics, leading to neural codes with favorable properties in terms of both information and energy.
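The abstract names three ingredients: a variational reconstruction bound on the mutual information (tight under a linear Gaussian decoder), online per-sample updates, and a slower-timescale auxiliary variable that enforces the energy constraint through saddle-point dynamics. As a rough illustration of how those pieces can fit together (this is a minimal sketch, not the paper's actual learning rules; the dimensions, learning rates, noise level, and energy budget `E_max` are all made-up assumptions), one can ascend the bound in the encoder `W` and decoder `A` on a fast timescale while a Lagrange multiplier `lam` for the mean-energy constraint rises on a slow timescale:

```python
import numpy as np

rng = np.random.default_rng(0)
d_s, d_r = 3, 5          # stimulus and neural-response dimensions (assumed)
sigma = 0.5              # encoding noise standard deviation (assumed)
W = 0.1 * rng.standard_normal((d_r, d_s))   # linear encoder
A = 0.1 * rng.standard_normal((d_s, d_r))   # linear Gaussian decoder (mean map)
lam = 0.0                # Lagrange multiplier for the energy constraint
E_max = 2.0              # energy budget: E[||r||^2] <= E_max (assumed)
eta_fast, eta_slow = 1e-2, 1e-3             # two timescales: fast primal, slow dual

for t in range(20000):
    s = rng.standard_normal(d_s)                    # stimulus sample
    r = W @ s + sigma * rng.standard_normal(d_r)    # noisy linear response
    err = s - A @ r                                 # decoding error
    # Fast timescale: ascend the variational bound, which here reduces to
    # minimizing the squared reconstruction error, minus the energy penalty.
    A += eta_fast * np.outer(err, r)                # decoder: least-squares rule
    W += eta_fast * (np.outer(A.T @ err, s) - lam * np.outer(r, s))
    # Slow timescale: dual ascent on the constraint E[||r||^2] - E_max,
    # projected to keep the multiplier nonnegative (saddle-point dynamics).
    lam = max(0.0, lam + eta_slow * (r @ r - E_max))
```

The point of the two learning rates is the saddle-point structure: the primal variables chase the current penalized objective quickly, while the multiplier integrates the constraint violation slowly, so the pair settles where the bound is maximized subject to the energy budget.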


Similar Articles

1. Multiple Timescale Online Learning Rules for Information Maximization with Energetic Constraints.
Neural Comput. 2019 May;31(5):943-979. doi: 10.1162/neco_a_01182. Epub 2019 Mar 18.
2. Synaptic dynamics: linear model and adaptation algorithm.
Neural Netw. 2014 Aug;56:49-68. doi: 10.1016/j.neunet.2014.04.001. Epub 2014 Apr 28.
3. Adaptive synchronization of activities in a recurrent network.
Neural Comput. 2009 Jun;21(6):1749-75. doi: 10.1162/neco.2009.02-08-708.
4. Biological batch normalisation: How intrinsic plasticity improves learning in deep neural networks.
PLoS One. 2020 Sep 23;15(9):e0238454. doi: 10.1371/journal.pone.0238454. eCollection 2020.
5. Mirrored STDP Implements Autoencoder Learning in a Network of Spiking Neurons.
PLoS Comput Biol. 2015 Dec 3;11(12):e1004566. doi: 10.1371/journal.pcbi.1004566. eCollection 2015 Dec.
6. Distributed Bayesian Computation and Self-Organized Learning in Sheets of Spiking Neurons with Local Lateral Inhibition.
PLoS One. 2015 Aug 18;10(8):e0134356. doi: 10.1371/journal.pone.0134356. eCollection 2015.
7. Neuron as a reward-modulated combinatorial switch and a model of learning behavior.
Neural Netw. 2013 Oct;46:62-74. doi: 10.1016/j.neunet.2013.04.010. Epub 2013 May 6.
8. On the sample complexity of learning for networks of spiking neurons with nonlinear synaptic interactions.
IEEE Trans Neural Netw. 2004 Sep;15(5):995-1001. doi: 10.1109/TNN.2004.832810.
9. Analytical description of the evolution of neural networks: learning rules and complexity.
Biol Cybern. 1999 Aug;81(2):169-75. doi: 10.1007/s004220050553.
10. Learning in neural networks by reinforcement of irregular spiking.
Phys Rev E Stat Nonlin Soft Matter Phys. 2004 Apr;69(4 Pt 1):041909. doi: 10.1103/PhysRevE.69.041909. Epub 2004 Apr 30.