Unification of MAP Estimation and Marginal Inference in Recurrent Neural Networks.

Author information

Yu Zhaofei, Chen Feng, Deng Fei

Publication information

IEEE Trans Neural Netw Learn Syst. 2018 Nov;29(11):5761-5766. doi: 10.1109/TNNLS.2018.2805813. Epub 2018 Mar 9.

DOI: 10.1109/TNNLS.2018.2805813
PMID: 29994079
Abstract

Numerous experimental data show that the human brain can represent probability distributions and perform Bayesian inference. However, it remains unclear how the brain implements probabilistic inference in the form of neural circuits. Several models have been proposed that aim to explain how networks of neurons carry out maximum a posteriori (MAP) estimation and marginal inference, but they are all task specific in that they treat MAP estimation and marginal inference separately. In this brief, we propose that the human brain could implement MAP estimation and marginal inference in the same network of neurons. We illustrate our result in hidden Markov models and prove that a recurrent neural network (RNN) implementation of belief propagation can be tuned to perform approximate Bayesian inference (to provide the posterior or conditional distribution over the latent causes of observations) or to identify the MAP, or peak, of the joint distribution. The key tuning parameter is a temperature parameter that controls the precision of the probability distributions being optimized. Theoretical analyses and experimental results demonstrate that RNNs can carry out near-optimal MAP estimation and marginal inference.
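
The temperature mechanism described in the abstract can be sketched in a few lines of Python (a minimal illustration, not the authors' implementation): replacing the sum in the HMM forward recursion with a tempered log-sum-exp, T·logsumexp(v/T), recovers the standard sum-product forward pass (marginal/filtering inference) at T = 1 and approaches the max-product (Viterbi) recursion used for MAP estimation as T → 0. The toy model, parameter values, and function names below are assumptions chosen only for illustration.

import numpy as np

def tempered_logsumexp(v, T, axis=0):
    # T * logsumexp(v / T): equals logsumexp at T = 1 and approaches max as T -> 0
    v = np.asarray(v) / T
    m = v.max(axis=axis, keepdims=True)
    return T * np.squeeze(m + np.log(np.exp(v - m).sum(axis=axis, keepdims=True)), axis=axis)

def tempered_forward(log_pi, log_A, log_B, obs, T):
    # Forward messages for an HMM; the temperature interpolates sum-product <-> max-product
    alpha = np.zeros((len(obs), log_pi.shape[0]))
    alpha[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, len(obs)):
        scores = alpha[t - 1][:, None] + log_A  # rows: previous state, cols: next state
        alpha[t] = tempered_logsumexp(scores, T, axis=0) + log_B[:, obs[t]]
    return alpha

# Toy 2-state HMM (illustrative numbers, not from the paper)
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.2, 0.8]])
B = np.array([[0.9, 0.1],
              [0.3, 0.7]])
obs = [0, 0, 1, 1, 1]
log_pi, log_A, log_B = np.log(pi), np.log(A), np.log(B)

# T = 1: standard sum-product forward pass; normalizing gives filtering marginals p(x_t | y_1..t)
alpha_sum = tempered_forward(log_pi, log_A, log_B, obs, T=1.0)
filtering = np.exp(alpha_sum - alpha_sum.max(axis=1, keepdims=True))
filtering /= filtering.sum(axis=1, keepdims=True)

# T -> 0: recursion approaches max-product (Viterbi); a full MAP decode would also backtrack
alpha_map = tempered_forward(log_pi, log_A, log_B, obs, T=1e-3)

print("filtering marginals (T = 1):")
print(filtering)
print("per-step argmax of max-product messages (T -> 0):", alpha_map.argmax(axis=1))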


Similar articles

1. Unification of MAP Estimation and Marginal Inference in Recurrent Neural Networks.
   IEEE Trans Neural Netw Learn Syst. 2018 Nov;29(11):5761-5766. doi: 10.1109/TNNLS.2018.2805813. Epub 2018 Mar 9.
2. Probabilistic inference of binary Markov random fields in spiking neural networks through mean-field approximation.
   Neural Netw. 2020 Jun;126:42-51. doi: 10.1016/j.neunet.2020.03.003. Epub 2020 Mar 9.
3. Marginal Bayesian Posterior Inference using Recurrent Neural Networks with Application to Sequential Models.
   Stat Sin. 2023 May;33(SI):1507-1532. doi: 10.5705/ss.202020.0348.
4. Bayesian Inference and Online Learning in Poisson Neuronal Networks.
   Neural Comput. 2016 Aug;28(8):1503-26. doi: 10.1162/NECO_a_00851. Epub 2016 Jun 27.
5. Recognizing recurrent neural networks (rRNN): Bayesian inference for recurrent neural networks.
   Biol Cybern. 2012 Jul;106(4-5):201-17. doi: 10.1007/s00422-012-0490-x. Epub 2012 May 12.
6. Dynamical Mechanism of Sampling-Based Probabilistic Inference Under Probabilistic Population Codes.
   Neural Comput. 2022 Feb 17;34(3):804-827. doi: 10.1162/neco_a_01477.
7. Emergent Inference of Hidden Markov Models in Spiking Neural Networks Through Winner-Take-All.
   IEEE Trans Cybern. 2020 Mar;50(3):1347-1354. doi: 10.1109/TCYB.2018.2871144. Epub 2018 Oct 3.
8. Model Selection and Parameter Inference in Phylogenetics Using Nested Sampling.
   Syst Biol. 2019 Mar 1;68(2):219-233. doi: 10.1093/sysbio/syy050.
9. Bayesian inference with probabilistic population codes.
   Nat Neurosci. 2006 Nov;9(11):1432-8. doi: 10.1038/nn1790. Epub 2006 Oct 22.
10. Efficient probabilistic inference in generic neural networks trained with non-probabilistic feedback.
    Nat Commun. 2017 Jul 26;8(1):138. doi: 10.1038/s41467-017-00181-8.