Max Planck Institute for Dynamics and Self-Organization, 37077 Göttingen, Germany.
Proc Natl Acad Sci U S A. 2021 Dec 14;118(50). doi: 10.1073/pnas.2021925118.
How can neural networks learn to efficiently represent complex and high-dimensional inputs via local plasticity mechanisms? Classical models of representation learning assume that feedforward weights are learned via pairwise Hebbian-like plasticity. Here, we show that pairwise Hebbian-like plasticity works only under unrealistic requirements on neural dynamics and input statistics. To overcome these limitations, we derive from first principles a learning scheme based on voltage-dependent synaptic plasticity rules. In this scheme, recurrent connections learn to locally balance feedforward input in individual dendritic compartments and can thereby modulate synaptic plasticity to learn efficient representations. We demonstrate in simulations that this learning scheme works robustly even for complex, high-dimensional inputs and with inhibitory transmission delays, where Hebbian-like plasticity fails. Our results draw a direct connection between dendritic excitatory-inhibitory balance and voltage-dependent synaptic plasticity as observed in vivo, and suggest that both are crucial for representation learning.