The Picower Institute for Learning & Memory, Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, United States of America.
Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, United States of America.
PLoS Comput Biol. 2020 Aug 7;16(8):e1007659. doi: 10.1371/journal.pcbi.1007659. eCollection 2020 Aug.
The brain consists of many interconnected networks with time-varying, partially autonomous activity. There are multiple sources of noise and variation, yet activity has to eventually converge to a stable, reproducible state (or sequence of states) for its computations to make sense. We approached this problem from a control-theory perspective by applying contraction analysis to recurrent neural networks. This allowed us to find mechanisms for achieving stability in multiple connected networks with biologically realistic dynamics, including synaptic plasticity and time-varying inputs. These mechanisms included inhibitory Hebbian plasticity, excitatory anti-Hebbian plasticity, synaptic sparsity, and excitatory-inhibitory balance. Our findings shed light on how stable computations might be achieved despite biological complexity. Crucially, our analysis is not limited to the stability of fixed geometric objects in state space (e.g., points, lines, planes), but extends to the stability of state trajectories, which may be complex and time-varying.
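The central idea of contraction analysis is that a system is contracting when any two trajectories, driven by the same time-varying input but started from different initial states, converge exponentially toward one another. The sketch below illustrates this numerically for a generic rate-based recurrent network (not the authors' specific models); the dynamics tau*dx/dt = -x + W*tanh(x) + u(t), the 0.8 spectral-norm bound on W (a standard sufficient condition for contraction with tanh units), and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50

# Illustrative assumption: random recurrent weights rescaled so the
# spectral norm of W is below 1. For tanh units (|tanh'| <= 1) this is
# a sufficient condition for the network dynamics to be contracting.
W = rng.normal(0.0, 1.0 / np.sqrt(n), (n, n))
W *= 0.8 / np.linalg.svd(W, compute_uv=False)[0]  # spectral norm -> 0.8

dt, tau = 0.01, 0.1

def step(x, t):
    """One Euler step of tau * dx/dt = -x + W @ tanh(x) + u(t)."""
    u = np.sin(2.0 * np.pi * t) * np.ones(n)  # shared time-varying input
    return x + (dt / tau) * (-x + W @ np.tanh(x) + u)

# Two trajectories from different initial states, identical input.
xa = rng.normal(size=n)
xb = rng.normal(size=n)
d0 = np.linalg.norm(xa - xb)

t = 0.0
for _ in range(2000):
    xa, xb = step(xa, t), step(xb, t)
    t += dt

# For a contracting system the distance between trajectories shrinks
# exponentially, so this ratio should be far below 1.
ratio = np.linalg.norm(xa - xb) / d0
print(ratio)
```

Because the trajectories forget their initial conditions, the network's state after the transient is a reproducible function of its input history, which is the stability property the abstract describes for complex, time-varying trajectories rather than fixed points.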