

Emergent Inference of Hidden Markov Models in Spiking Neural Networks Through Winner-Take-All.

Publication Information

IEEE Trans Cybern. 2020 Mar;50(3):1347-1354. doi: 10.1109/TCYB.2018.2871144. Epub 2018 Oct 3.

DOI: 10.1109/TCYB.2018.2871144
PMID: 30295641
Abstract

Hidden Markov models (HMMs) underpin the solution to many problems in computational neuroscience. However, it is still unclear how to implement inference of HMMs with a network of neurons in the brain. The existing methods suffer from the problem of being nonspiking and inaccurate. Here, we build a precise equivalence between the inference equation of HMMs with time-invariant hidden variables and the dynamics of spiking winner-take-all (WTA) neural networks. We show that the membrane potential of each spiking neuron in the WTA circuit encodes the logarithm of the posterior probability of the hidden variable in each state, and the firing rate of each neuron is proportional to the posterior probability of the HMMs. We prove that the time course of the neural firing rate can implement posterior inference of HMMs. Theoretical analysis and experimental results show that the proposed WTA circuit can get accurate inference results of HMMs.
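The core equivalence can be sketched numerically. For an HMM whose hidden variable is time-invariant, exact posterior inference reduces to accumulating log-likelihoods on top of the log prior; interpreting the accumulated values as membrane potentials, a softmax readout (a WTA-style normalization) recovers the posterior, and the winning neuron corresponds to the MAP state. The toy model below (state count, emission matrix, variable names) is illustrative only, not the paper's circuit implementation.

```python
import numpy as np

# Illustrative sketch: WTA-style posterior inference for an HMM with a
# time-invariant hidden variable z. Each "neuron" k keeps a membrane
# potential u[k] = log p(z = k) + sum_t log p(x_t | z = k); the softmax
# of the potentials equals the exact posterior p(z = k | x_1..T).
rng = np.random.default_rng(0)

K = 3                                   # number of hidden states
prior = np.full(K, 1.0 / K)             # uniform prior over states
# Emission model: each state favors one of 4 observable symbols
emission = np.array([[0.7, 0.1, 0.1, 0.1],
                     [0.1, 0.7, 0.1, 0.1],
                     [0.1, 0.1, 0.7, 0.1]])

true_state = 0
T = 50
obs = rng.choice(4, size=T, p=emission[true_state])

# Membrane potentials start at the log prior, then accumulate evidence
u = np.log(prior)
for x in obs:
    u = u + np.log(emission[:, x])

# WTA readout: firing rates = softmax of potentials = exact posterior
r = np.exp(u - u.max())
r = r / r.sum()
print(r.argmax())  # prints 0: the true state wins the competition
```

Working in log space is what makes the neural reading natural: evidence arrives additively (like synaptic input integrating on a membrane), and only the normalization step requires the lateral competition that the WTA circuit provides.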

Similar Articles

1. Emergent Inference of Hidden Markov Models in Spiking Neural Networks Through Winner-Take-All.
IEEE Trans Cybern. 2020 Mar;50(3):1347-1354. doi: 10.1109/TCYB.2018.2871144. Epub 2018 Oct 3.
2. Probabilistic inference of binary Markov random fields in spiking neural networks through mean-field approximation.
Neural Netw. 2020 Jun;126:42-51. doi: 10.1016/j.neunet.2020.03.003. Epub 2020 Mar 9.
3. Computation with spikes in a winner-take-all network.
Neural Comput. 2009 Sep;21(9):2437-65. doi: 10.1162/neco.2009.07-08-829.
4. Probabilistic inference in general graphical models through sampling in stochastic networks of spiking neurons.
PLoS Comput Biol. 2011 Dec;7(12):e1002294. doi: 10.1371/journal.pcbi.1002294. Epub 2011 Dec 15.
5. Neural dynamics as sampling: a model for stochastic computation in recurrent networks of spiking neurons.
PLoS Comput Biol. 2011 Nov;7(11):e1002211. doi: 10.1371/journal.pcbi.1002211. Epub 2011 Nov 3.
6. Hidden Markov models for the stimulus-response relationships of multistate neural systems.
Neural Comput. 2011 May;23(5):1071-132. doi: 10.1162/NECO_a_00118. Epub 2011 Feb 7.
7. Bayesian Inference and Online Learning in Poisson Neuronal Networks.
Neural Comput. 2016 Aug;28(8):1503-26. doi: 10.1162/NECO_a_00851. Epub 2016 Jun 27.
8. Synthesis of recurrent neural dynamics for monotone inclusion with application to Bayesian inference.
Neural Netw. 2020 Nov;131:231-241. doi: 10.1016/j.neunet.2020.07.037. Epub 2020 Aug 12.
9. Stochastic dynamics of a finite-size spiking neural network.
Neural Comput. 2007 Dec;19(12):3262-92. doi: 10.1162/neco.2007.19.12.3262.
10. On the accuracy and computational cost of spiking neuron implementation.
Neural Netw. 2020 Feb;122:196-217. doi: 10.1016/j.neunet.2019.09.026. Epub 2019 Oct 11.

Cited By

1. An FPGA implementation of Bayesian inference with spiking neural networks.
Front Neurosci. 2024 Jan 5;17:1291051. doi: 10.3389/fnins.2023.1291051. eCollection 2023.
2. SAM: A Unified Self-Adaptive Multicompartmental Spiking Neuron Model for Learning With Working Memory.
Front Neurosci. 2022 Apr 18;16:850945. doi: 10.3389/fnins.2022.850945. eCollection 2022.
3. Brain-Inspired Hardware Solutions for Inference in Bayesian Networks.
Front Neurosci. 2021 Dec 2;15:728086. doi: 10.3389/fnins.2021.728086. eCollection 2021.