Laboratory of Computational Neurophysics, Convergence Research Center for Brain Science, Brain Science Institute, Korea Institute of Science and Technology, 02792 Seoul, Republic of Korea.
Department of Physics and Astronomy and Center for Theoretical Physics, Seoul National University, 08826 Seoul, Republic of Korea.
Front Biosci (Landmark Ed). 2021 Oct 30;26(10):723-739. doi: 10.52586/4983.
Ever since the seminal work of McCulloch and Pitts, the theory of neural computation and its philosophical foundation, known as 'computationalism', have been central to brain-inspired artificial intelligence (AI) technologies. The present study describes neural dynamics and neural coding approaches for understanding the mechanisms of neural computation. The primary focus is to characterize the multiscale nature of logic computations in the brain, which may occur at the single-neuron level, between neighboring neurons via synaptic transmission, and at the neural-circuit level.

For this, we begin the analysis with simple neuron models that account for basic Boolean logic operations at the single-neuron level and then move on to phenomenological neuron models to explain neural computation from the viewpoints of neural dynamics and neural coding. The roles of synaptic transmission in neural computation are investigated using biologically realistic multi-compartment neuron models: two representative computational entities, the CA1 pyramidal neuron in the hippocampus and the Purkinje cell in the cerebellum, are analyzed within an information-theoretic framework. We then construct two-dimensional mutual information maps, which demonstrate that synaptic transmission can process not only the basic AND/OR Boolean logic operations but also the linearly non-separable XOR function. Finally, we provide an overview of evolutionary algorithms and discuss their benefits for the automated design of neural circuits that perform logic operations.

This study provides a comprehensive perspective on multiscale logic operations in the brain from both the neural dynamics and neural coding viewpoints. It should thus be useful for understanding the computational principles of the brain and may help in designing biologically plausible neuron models for AI devices.
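To make the single-neuron picture concrete, the following is a minimal Python sketch (not taken from the paper) of McCulloch-Pitts-style threshold units: a single unit suffices for AND and OR, whereas the linearly non-separable XOR requires a small two-layer circuit. The weights and thresholds below are illustrative choices, not parameters from the study.

# Minimal McCulloch-Pitts-style threshold units (illustrative sketch).
# AND and OR are computable by one unit; XOR needs a two-layer circuit.

def threshold_unit(inputs, weights, theta):
    """Fire (return 1) if the weighted input sum reaches the threshold theta."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= theta)

def AND(x1, x2):
    return threshold_unit((x1, x2), weights=(1, 1), theta=2)

def OR(x1, x2):
    return threshold_unit((x1, x2), weights=(1, 1), theta=1)

def XOR(x1, x2):
    # XOR is not linearly separable, so it is composed here as
    # AND(OR(x1, x2), NAND(x1, x2)), i.e., a two-layer circuit.
    nand = threshold_unit((x1, x2), weights=(-1, -1), theta=-1)
    return AND(OR(x1, x2), nand)

if __name__ == "__main__":
    for x1 in (0, 1):
        for x2 in (0, 1):
            print(x1, x2, "->", AND(x1, x2), OR(x1, x2), XOR(x1, x2))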
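The two-dimensional mutual information maps mentioned in the abstract can be illustrated with a toy recipe: scan two model parameters on a grid and, at each grid point, estimate the mutual information between a target Boolean function of the two inputs and the model's (noisy) binary output. The sketch below uses a simple noisy threshold unit in place of the paper's multi-compartment CA1 pyramidal and Purkinje cell models; the parameter names (a synaptic weight w2 and a threshold theta) are illustrative assumptions.

# Toy construction of a 2D mutual information map over (w2, theta),
# assuming the four input patterns are equiprobable.

import math
import random

def mutual_information(joint):
    """Mutual information (bits) from a joint distribution {(a, b): p}."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    mi = 0.0
    for (a, b), p in joint.items():
        if p > 0:
            mi += p * math.log2(p / (pa[a] * pb[b]))
    return mi

def noisy_unit(x1, x2, w2, theta, noise=0.3, n_trials=2000):
    """Empirical firing probability of a threshold unit with additive noise."""
    fires = 0
    for _ in range(n_trials):
        drive = 1.0 * x1 + w2 * x2 + random.gauss(0.0, noise)
        fires += drive >= theta
    return fires / n_trials

def mi_map(target, w2_values, theta_values):
    """MI between target(x1, x2) and the unit's output on a (w2, theta) grid."""
    rows = []
    for w2 in w2_values:
        row = []
        for theta in theta_values:
            joint = {}
            for x1 in (0, 1):
                for x2 in (0, 1):
                    p_fire = noisy_unit(x1, x2, w2, theta)
                    t = target(x1, x2)
                    joint[(t, 1)] = joint.get((t, 1), 0.0) + 0.25 * p_fire
                    joint[(t, 0)] = joint.get((t, 0), 0.0) + 0.25 * (1 - p_fire)
            row.append(mutual_information(joint))
        rows.append(row)
    return rows

xor = lambda x1, x2: x1 ^ x2
grid = mi_map(xor, w2_values=[0.5, 1.0, 1.5], theta_values=[0.5, 1.0, 1.5])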
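Finally, the evolutionary-algorithm idea for automated neural-circuit design can be sketched as a generic genetic algorithm that evolves the weights and thresholds of a small two-layer threshold circuit toward a target truth table (XOR here). The encoding, selection, crossover, and mutation operators are standard textbook choices, not the specific procedure used in the study.

# Generic genetic algorithm evolving a two-layer threshold circuit for XOR.

import random

TRUTH_TABLE = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR

def circuit(params, x1, x2):
    """Two hidden threshold units feeding one output threshold unit."""
    w11, w12, t1, w21, w22, t2, v1, v2, t_out = params
    h1 = int(w11 * x1 + w12 * x2 >= t1)
    h2 = int(w21 * x1 + w22 * x2 >= t2)
    return int(v1 * h1 + v2 * h2 >= t_out)

def fitness(params):
    """Fraction of truth-table rows the circuit gets right."""
    return sum(circuit(params, *x) == y for x, y in TRUTH_TABLE) / len(TRUTH_TABLE)

def evolve(pop_size=60, generations=200, mutation_std=0.3):
    population = [[random.uniform(-2, 2) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == 1.0:
            break
        parents = population[: pop_size // 4]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, 9)                 # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g + random.gauss(0, mutation_std) for g in child]  # mutation
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print("best fitness:", fitness(best))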