

Modeling of a Neural System Based on Statistical Mechanics.

Authors

Cho Myoung Won, Choi Moo Young

Affiliations

Department of Global Medical Science, Sungshin Women's University, Seoul 01133, Korea.

Department of Physics and Astronomy and Center for Theoretical Physics, Seoul National University, Seoul 08826, Korea.

Publication Information

Entropy (Basel). 2018 Nov 5;20(11):848. doi: 10.3390/e20110848.

Abstract

The minimization of a free energy is often regarded as the key principle in understanding how the brain works and how the brain structure forms. In particular, a statistical-mechanics-based neural network model is expected to allow one to interpret many aspects of the neural firing and learning processes in terms of general concepts and mechanisms in statistical physics. Nevertheless, the definition of the free energy in a neural system is usually an intricate problem without an evident solution. After the pioneering work by Hopfield, several statistical-mechanics-based models have suggested a variety of definitions of the free energy or the entropy in a neural system. Among those, the Feynman machine, proposed recently, presents the free energy of a neural system defined via the Feynman path integral formulation with an explicit time variable. In this study, we first give a brief review of the previous relevant models, paying attention to their troublesome problems, and examine how the Feynman machine overcomes several vulnerable points in those models and derives the firing and learning rules of a (biological) neural system as extremum states of the free energy. Specifically, the model reveals that the biological learning mechanism called spike-timing-dependent plasticity is related to the free-energy minimization principle. The computing and learning mechanisms in the Feynman machine are based on the exact spike timings of neurons, as in a biological neural system. We discuss the consequences of adopting an explicit time variable in modeling a neural system and the application of the free-energy minimization principle to understanding phenomena in the brain.
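The abstract refers to spike-timing-dependent plasticity (STDP), in which the sign and magnitude of a synaptic weight change depend on the relative timing of pre- and postsynaptic spikes. As a minimal illustration (not the paper's derivation), the standard pairwise STDP rule can be sketched as follows; the parameter values (`a_plus`, `a_minus`, `tau`) are illustrative assumptions, not values from the paper.

```python
import math

def stdp_delta_w(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pairwise STDP weight change for a spike-time difference
    dt = t_post - t_pre (in ms).

    Potentiation when the presynaptic spike precedes the postsynaptic
    one (dt > 0), depression otherwise; both effects decay
    exponentially with |dt|. Parameter values are illustrative only.
    """
    if dt > 0:
        return a_plus * math.exp(-dt / tau)   # pre before post: strengthen
    elif dt < 0:
        return -a_minus * math.exp(dt / tau)  # post before pre: weaken
    return 0.0

# Pre fires 10 ms before post -> weight increases;
# post fires 10 ms before pre -> weight decreases.
print(stdp_delta_w(10.0))
print(stdp_delta_w(-10.0))
```

In the paper's framing, an update rule of this timing-dependent form is what the free-energy minimization principle is argued to reproduce, since the Feynman machine's learning mechanism operates on exact spike timings.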


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/124c/7512410/eafaef271279/entropy-20-00848-g001.jpg
