Group for Neural Theory, Département d'Etudes Cognitives, Ecole Normale Supérieure, 75005 Paris, France.
Neuron. 2017 Jun 7;94(5):969-977. doi: 10.1016/j.neuron.2017.05.016.
Understanding how the brain learns to compute functions reliably, efficiently, and robustly with noisy spiking activity is a fundamental challenge in neuroscience. Most sensory and motor tasks can be described as dynamical systems and could presumably be learned by adjusting connection weights in a recurrent biological neural network. However, learning in recurrent networks is greatly complicated by the credit assignment problem: the contribution of each connection to the global output error cannot be determined from quantities locally accessible to the synapse. Combining tools from adaptive control theory and efficient coding theory, we propose that neural circuits can indeed learn complex dynamic tasks with local synaptic plasticity rules, as long as they combine two experimentally established neural mechanisms. First, they should receive top-down feedback driving both their activity and their synaptic plasticity. Second, inhibitory interneurons should maintain a tight balance between excitation and inhibition in the circuit. The resulting networks can learn arbitrary dynamical systems and produce irregular spike trains as variable as those observed experimentally. Yet this variability in single neurons may hide an extremely efficient and robust computation at the population level.
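To make the proposed mechanism concrete, the following is a minimal sketch (not the authors' code or exact model) of a spike-coding network in the spirit of this framework: leaky integrate-and-fire neurons track a one-dimensional signal, their voltages effectively encode the top-down prediction error, and a purely local plasticity rule, triggered by presynaptic spikes and driven only by the postsynaptic voltage and rate, pushes the recurrent weights toward the tight excitation-inhibition balance that makes the population readout efficient. All parameter names and values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- illustrative parameters (assumptions, not taken from the paper) ---
N     = 20      # number of neurons
dt    = 1e-4    # integration step (s)
lam   = 50.0    # membrane / readout leak rate (1/s)
mu    = 1e-4    # quadratic cost on firing rates
eta   = 0.01    # learning rate of the local plasticity rule
T_end = 5.0     # simulated time (s)

D = rng.normal(0.0, 0.1, size=N)    # fixed decoding weights (1-D signal)
F = D.copy()                        # feedforward weights carry the input
Omega = -0.5 * np.outer(D, D)       # recurrent weights, deliberately off-balance
thresh = 0.5 * (D**2 + mu)          # spiking thresholds

V, r, xhat = np.zeros(N), np.zeros(N), 0.0

for step in range(int(T_end / dt)):
    t = step * dt
    x = np.sin(2 * np.pi * 2 * t)                        # target signal
    c = 4 * np.pi * np.cos(2 * np.pi * 2 * t) + lam * x  # dx/dt + lam * x

    # Spike generation: with V_i ~ D_i * (x - xhat), a threshold crossing
    # means the population readout error has grown large enough that a
    # spike of neuron i reduces it -- the error feedback drives activity.
    o = (V > thresh).astype(float)
    if o.sum() > 1:                   # allow at most one spike per step
        k = int(np.argmax(V - thresh))
        o[:] = 0.0
        o[k] = 1.0

    V    += dt * (-lam * V + F * c) + Omega @ o   # leak + input + recurrence
    r    += dt * (-lam * r) + o                   # filtered spike trains
    xhat += dt * (-lam * xhat) + D @ o            # population readout

    # Local rule: when neuron k spikes, each synapse Omega[i, k] updates
    # using only the presynaptic spike and its own postsynaptic V_i, r_i;
    # no global error signal is required.  Its fixed point approximates
    # the balanced solution Omega ~ -(np.outer(D, D) + mu * np.eye(N)).
    for k in np.flatnonzero(o):
        Omega[:, k] -= eta * (V + mu * r + Omega[:, k])
        Omega[k, k] -= eta * mu

print("readout error at end of training:",
      abs(np.sin(2 * np.pi * 2 * T_end) - xhat))
```

In this idealized setting the learned recurrent (largely inhibitory) weights come to cancel the shared coding error spike by spike, so individual units fire irregularly while the population readout xhat tracks x(t) closely, which mirrors the abstract's claim that single-neuron variability can coexist with efficient, robust population-level computation.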