Gail A. Carpenter, Boriana L. Milenova
Department of Cognitive and Neural Systems, Boston University, Boston, Massachusetts 02215, USA.
Neural Comput. 2002 Apr;14(4):873-88. doi: 10.1162/089976602317318992.
Markram and Tsodyks, by showing that the elevated synaptic efficacy observed with single-pulse long-term potentiation (LTP) measurements disappears with higher-frequency test pulses, have critically challenged the conventional assumption that LTP reflects a general gain increase. This observed change in frequency dependence during synaptic potentiation is called redistribution of synaptic efficacy (RSE). RSE is here seen as the local realization of a global design principle in a neural network for pattern coding. The underlying computational model posits an adaptive threshold rather than a multiplicative weight as the elementary unit of long-term memory. A distributed instar learning law allows thresholds to increase only monotonically, but adaptation has a bidirectional effect on the model postsynaptic potential. At each synapse, threshold increases implement pattern selectivity via a frequency-dependent signal component, while a complementary frequency-independent component nonspecifically strengthens the path. This synaptic balance produces changes in frequency dependence that are robustly similar to those observed by Markram and Tsodyks. The network design therefore suggests a functional purpose for RSE, which, by helping to bound total memory change, supports a distributed coding scheme that is stable with fast as well as slow learning. Multiplicative weights have served as a cornerstone for models of physiological data and neural systems for decades. Although the model discussed here does not implement detailed physiology of synaptic transmission, its new learning laws operate in a network architecture that suggests how recently discovered synaptic computations such as RSE may help produce new network capabilities such as learning that is fast, stable, and distributed.
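The abstract's core mechanism — an adaptive threshold that can only rise, yet whose growth both sharpens pattern selectivity (a frequency-dependent signal component) and nonspecifically strengthens the path (a frequency-independent component) — can be sketched in a few lines. The functions, parameter names, and the rectified update rule below are illustrative assumptions, not the paper's exact equations:

```python
import numpy as np

def dinstar_update(tau, y, x, lr=0.5):
    """Illustrative distributed-instar-style update (assumed form):
    each threshold tau_ij rises toward the coincidence of postsynaptic
    activity y_j and presynaptic input x_i, and can never decrease."""
    # Rectifying the update at zero guarantees monotone threshold growth.
    dtau = lr * np.maximum(0.0, np.outer(y, x) - tau)
    return tau + dtau

def synaptic_signal(x, tau, freq_dep_gain=1.0, freq_indep_gain=1.0):
    """Assumed two-component postsynaptic signal illustrating the
    'synaptic balance' described in the abstract."""
    # Frequency-dependent, pattern-selective component: suppressed as
    # the threshold grows, so learning makes the synapse selective.
    selective = freq_dep_gain * np.maximum(0.0, x - tau)
    # Frequency-independent component: grows with the threshold,
    # nonspecifically strengthening the path.
    nonspecific = freq_indep_gain * tau
    return selective + nonspecific
```

Under this sketch, a threshold increase lowers one signal component while raising the other — a bidirectional effect on the model postsynaptic potential from a purely monotone memory variable, which is the hypothesized computational analogue of RSE.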