Independent Researcher, New York, NY, United States.
Front Neural Circuits. 2024 Mar 5;18:1280604. doi: 10.3389/fncir.2024.1280604. eCollection 2024.
A feature of the brains of intelligent animals is the ability to learn to respond to an ensemble of active neuronal inputs with a behaviorally appropriate ensemble of active neuronal outputs. Previously, a hypothesis was proposed for how this mechanism is implemented at the cellular level within the neocortical pyramidal neuron: apical tuft or perisomatic inputs initiate "guess" neuron firings, while the basal dendrites identify input patterns based on excited synaptic clusters, with the cluster excitation strength adjusted based on reward feedback. This simple mechanism allows neurons to learn to classify their inputs in a surprisingly intelligent manner. Here, we revise and extend this hypothesis. We modify the synaptic plasticity rules to align with behavioral time scale synaptic plasticity (BTSP) observed in hippocampal area CA1, making the framework more biophysically and behaviorally plausible. The neurons for the guess firings are selected in a voluntary manner via feedback connections to apical tufts in neocortical layer 1, leading to dendritic Ca²⁺ spikes with burst firing, which are postulated to be neural correlates of attentional, aware processing. Once learned, the neuronal input classification is executed without voluntary or conscious control, enabling hierarchical incremental learning of classifications that is effective in our inherently classifiable world. In addition to voluntary bursts, we propose that pyramidal neuron burst firing can also be involuntary, likewise initiated via apical tuft inputs, drawing attention toward important cues such as novelty and noxious stimuli. We classify the excitations of neocortical pyramidal neurons into four categories based on their excitation pathway: attentional versus automatic and voluntary/acquired versus involuntary.
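The core learning loop described above (top-down "guess" firings, basal synaptic clusters detecting input patterns, and reward feedback adjusting cluster strengths) can be illustrated with a minimal toy sketch. This is an assumption-laden caricature, not the paper's model: the class name, threshold, learning rate, and scalar reward signal are all illustrative choices, and real BTSP dynamics are far richer.

```python
class ToyPyramidalNeuron:
    """Hypothetical sketch of the abstract's mechanism: 'guess' firings
    are forced top-down (apical input), basal synaptic clusters sum their
    excitation, and cluster strengths are adjusted by reward feedback.
    All names and parameters here are illustrative assumptions."""

    def __init__(self, n_inputs, threshold=1.0, lr=0.5):
        self.weights = [0.0] * n_inputs   # per-cluster excitation strengths
        self.threshold = threshold        # firing threshold in automatic mode
        self.lr = lr                      # reward-driven learning rate

    def drive(self, pattern):
        # summed excitation from the active basal synaptic clusters
        return sum(w for w, x in zip(self.weights, pattern) if x)

    def fire(self, pattern, guess=False):
        # an attentional 'guess' burst forces a firing; otherwise the
        # neuron fires automatically once learned clusters suffice
        return guess or self.drive(pattern) >= self.threshold

    def reward_update(self, pattern, reward):
        # strengthen (reward > 0) or weaken (reward < 0) the clusters
        # that were active during the firing
        for i, x in enumerate(pattern):
            if x:
                self.weights[i] += self.lr * reward


# usage: teach the neuron to respond to pattern A but not pattern B
neuron = ToyPyramidalNeuron(n_inputs=4)
A = [1, 1, 0, 0]
B = [0, 0, 1, 1]
for _ in range(3):
    if neuron.fire(A, guess=True):       # guess firing on A ...
        neuron.reward_update(A, +1.0)    # ... is rewarded
    if neuron.fire(B, guess=True):       # guess firing on B ...
        neuron.reward_update(B, -1.0)    # ... is punished

# after training, classification runs without 'guess' (automatic mode)
assert neuron.fire(A) and not neuron.fire(B)
```

Once the reward-shaped cluster strengths cross threshold, the classification runs in automatic mode with no top-down guess input, mirroring the transition from attentional to automatic processing described above.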
Additionally, we hypothesize that dendrites within pyramidal neuron minicolumn bundles are coupled via depolarization cross-induction, enabling minicolumn functions such as the creation of powerful hierarchical "hyperneurons" and the internal representation of the external world. We suggest building blocks for extending the microcircuit theory to network-level processing, which, interestingly, yields variants resembling artificial neural networks currently in use. On a more speculative note, we conjecture that the principles of intelligence in universes governed by certain types of physical laws might resemble ours.