Nguyen Katrina P, Person Abigail L
Department of Physiology and Biophysics, University of Colorado Anschutz Medical Campus, Aurora, CO, USA.
Nat Rev Neurosci. 2025 Jun 16. doi: 10.1038/s41583-025-00936-z.
The rise of the deep neural network as the workhorse of artificial intelligence has brought increased attention to how network architectures serve specialized functions. The cerebellum, with its largely shallow, feedforward architecture, provides a curious example of such a specialized network. Within the cerebellum, tiny supernumerary granule cells project to a monolayer of giant Purkinje neurons that reweight synaptic inputs under the instructive influence of a unitary synaptic input from climbing fibres. What might this predominantly feedforward organization confer computationally? Here we review evidence for and against the hypothesis that the cerebellum learns basic associative feedforward control policies to speed up motor control and learning. We contrast and link this feedforward control framework with another prominent set of theories proposing that the cerebellum computes internal models. Ultimately, we suggest that the cerebellum may implement control through mechanisms that resemble internal models but involve model-free implicit mappings of high-dimensional sensorimotor contexts to motor output.
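The abstract's description of climbing-fibre-instructed reweighting of granule-cell inputs onto Purkinje cells amounts to a supervised, Marr-Albus-style feedforward mapping. The sketch below is a minimal illustration of that idea under stated assumptions, not code from the review: the layer sizes, the delta-rule reading of the climbing-fibre teaching signal, and the target_policy stand-in for a desired motor command are all arbitrary choices made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not figures from the review):
# a small mossy-fibre input is expanded onto many granule cells,
# which converge onto a single Purkinje-cell readout.
n_mossy, n_granule = 4, 200

# Fixed random expansion: mossy fibres -> granule cells (thresholded).
W_mf_gc = rng.normal(size=(n_granule, n_mossy))

def granule_layer(x):
    """Sparse, nonlinear expansion of the sensorimotor context."""
    return np.maximum(W_mf_gc @ x, 0.0)

# Plastic granule-cell -> Purkinje weights, adjusted by a climbing-fibre
# 'teaching' signal (here simply the output error), a delta-rule reading
# of Marr-Albus-style cerebellar learning.
w_gc_pc = np.zeros(n_granule)
lr = 1e-3

def target_policy(x):
    # Hypothetical stand-in for the desired feedforward motor command.
    return np.sin(x).sum()

for _ in range(5000):
    x = rng.uniform(-1.0, 1.0, size=n_mossy)       # sensorimotor context
    g = granule_layer(x)
    motor_out = w_gc_pc @ g                          # Purkinje-cell readout
    climbing_fibre = target_policy(x) - motor_out    # instructive error
    w_gc_pc += lr * climbing_fibre * g               # reweight inputs

# After training, the feedforward mapping approximates the control policy
# implicitly, without any explicit model of the controlled plant.
```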