
Approximate, computationally efficient online learning in Bayesian spiking neurons.

Affiliation

NeuroEngineering Laboratory, Department of Electrical and Electronic Engineering, The University of Melbourne, and the Centre for Neural Engineering, The University of Melbourne, Victoria 3010, Australia

Publication

Neural Comput. 2014 Mar;26(3):472-96. doi: 10.1162/NECO_a_00560. Epub 2013 Dec 9.

Abstract

Bayesian spiking neurons (BSNs) provide a probabilistic interpretation of how neurons perform inference and learning. Online learning in BSNs typically involves parameter estimation based on maximum-likelihood expectation-maximization (ML-EM), which is computationally slow and limits the potential for studying networks of BSNs. An online learning algorithm, fast learning (FL), is presented that is more computationally efficient than the benchmark ML-EM for a fixed number of time steps as the number of inputs to a BSN increases (e.g., 16.5 times faster run times for 20 inputs). Although ML-EM appears to converge 2.0 to 3.6 times faster than FL, its higher computational cost means that ML-EM takes longer than FL to simulate to convergence. FL also provides reasonable convergence performance that is robust to initialization of parameter estimates far from the true parameter values. However, parameter estimation depends on the range of true parameter values. Nevertheless, for a physiologically meaningful range of parameter values, FL gives very good average estimation accuracy, despite its approximate nature. The FL algorithm therefore provides an efficient tool, complementary to ML-EM, for exploring BSN networks in more detail in order to better understand their biological relevance. Moreover, the simplicity of the FL algorithm means it can be easily implemented in neuromorphic VLSI, so that one can take advantage of the energy-efficient spike coding of BSNs.
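The abstract does not reproduce the FL or ML-EM update equations, so neither is sketched here. As a rough, generic illustration of what "online parameter estimation in a spiking model" means, the following toy sketch runs a stochastic-gradient maximum-likelihood update for the input weights of a simple Poisson spiking neuron. The model (rate λ = exp(w·x), per-bin spike count ~ Poisson(λ·dt)) and all names are our assumptions for illustration only; this is not the paper's algorithm.

```python
import math
import random

# Toy illustration (NOT the paper's FL or ML-EM algorithm): online
# maximum-likelihood estimation of input weights for a simple Poisson
# spiking neuron. Rate model: lam = exp(w . x); the spike count in a
# bin of width dt is Poisson(lam * dt). The log-likelihood gradient
# w.r.t. w is (n_spikes - lam*dt) * x, giving a simple online update.

def poisson_sample(rng, mean):
    # Knuth's method; adequate for the small per-bin means used here.
    L = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate_and_learn(w_true, steps=50000, dt=0.01, lr=0.02, seed=0):
    rng = random.Random(seed)
    d = len(w_true)
    w_est = [0.0] * d  # initialize far from the true weights
    for _ in range(steps):
        x = [rng.gauss(0.0, 1.0) for _ in range(d)]  # input drive
        lam_true = math.exp(sum(w * xi for w, xi in zip(w_true, x)))
        n = poisson_sample(rng, lam_true * dt)       # observed spikes
        lam_est = math.exp(sum(w * xi for w, xi in zip(w_est, x)))
        # online ML gradient step: d/dw log p(n | w) = (n - lam*dt) * x
        for i in range(d):
            w_est[i] += lr * (n - lam_est * dt) * x[i]
    return w_est

if __name__ == "__main__":
    w_true = [0.8, -0.5, 0.3]
    print(simulate_and_learn(w_true))
```

The per-spike update is cheap and fully online, which is the kind of property that makes such rules attractive for neuromorphic VLSI; the paper's actual FL rule and its complexity comparison against ML-EM are given in the full text.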

