
Integrating probabilistic models of perception and interactive neural networks: a historical and tutorial review.

Affiliations

Department of Psychology and Center for Mind, Brain, and Computation, Stanford University, Stanford, CA, USA.

Publication information

Front Psychol. 2013 Aug 20;4:503. doi: 10.3389/fpsyg.2013.00503. eCollection 2013.

Abstract

This article seeks to establish a rapprochement between explicitly Bayesian models of contextual effects in perception and neural network models of such effects, particularly the connectionist interactive activation (IA) model of perception. The article is in part an historical review and in part a tutorial, reviewing the probabilistic Bayesian approach to understanding perception and how it may be shaped by context, and also reviewing ideas about how such probabilistic computations may be carried out in neural networks, focusing on the role of context in interactive neural networks, in which both bottom-up and top-down signals affect the interpretation of sensory inputs. It is pointed out that connectionist units that use the logistic or softmax activation functions can exactly compute Bayesian posterior probabilities when the bias terms and connection weights affecting such units are set to the logarithms of appropriate probabilistic quantities. Bayesian concepts such as the prior, likelihood, (joint and marginal) posterior, probability matching and maximizing, and calculating vs. sampling from the posterior are all reviewed and linked to neural network computations. Probabilistic and neural network models are explicitly linked to the concept of a probabilistic generative model that describes the relationship between the underlying target of perception (e.g., the word intended by a speaker or other source of sensory stimuli) and the sensory input that reaches the perceiver for use in inferring the underlying target. It is shown how a new version of the IA model called the multinomial interactive activation (MIA) model can sample correctly from the joint posterior of a proposed generative model for perception of letters in words, indicating that interactive processing is fully consistent with principled probabilistic computation. Ways in which these computations might be realized in real neural systems are also considered.
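The following is a minimal sketch, not taken from the article, illustrating the claim above: a softmax unit whose bias is set to log P(h) (the log prior) and whose evidence-driven input is log P(e | h) (the log likelihood) outputs exactly the Bayesian posterior P(h | e). The three hypotheses and their probabilities are hypothetical numbers chosen purely for illustration (Python/NumPy).

    import numpy as np

    # Hypothetical example: three candidate hypotheses h (e.g., candidate words)
    # and the likelihood of one observed sensory input e under each of them.
    prior = np.array([0.5, 0.3, 0.2])          # P(h)
    likelihood = np.array([0.10, 0.40, 0.25])  # P(e | h)

    # Direct Bayes rule: P(h | e) = P(e | h) P(h) / sum_h' P(e | h') P(h')
    posterior_bayes = likelihood * prior
    posterior_bayes /= posterior_bayes.sum()

    # Softmax unit: net input = bias + evidence term, with the bias set to the
    # log prior and the evidence term set to the log likelihood.
    net_input = np.log(prior) + np.log(likelihood)
    posterior_softmax = np.exp(net_input) / np.exp(net_input).sum()

    # The two computations agree exactly (up to floating-point error).
    assert np.allclose(posterior_bayes, posterior_softmax)
    print(posterior_bayes, posterior_softmax)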


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/447a/3747375/8ab61d32c459/fpsyg-04-00503-g0001.jpg
