Gail A. Carpenter
Boston University, Boston, U.S.A.
Neural Netw. 1997 Nov;10(8):1473-1494. doi: 10.1016/s0893-6080(97)00004-x.
A class of adaptive resonance theory (ART) models for learning, recognition, and prediction with arbitrarily distributed code representations is introduced. Distributed ART neural networks combine the stable fast learning capabilities of winner-take-all ART systems with the noise tolerance and code compression capabilities of multilayer perceptrons. With a winner-take-all code, the unsupervised model dART reduces to fuzzy ART and the supervised model dARTMAP reduces to fuzzy ARTMAP. With a distributed code, these networks automatically apportion learned changes according to the degree of activation of each coding node, which permits fast as well as slow learning without catastrophic forgetting. Distributed ART models replace the traditional neural network path weight with a dynamic weight equal to the rectified difference between coding node activation and an adaptive threshold. Thresholds increase monotonically during learning according to a principle of atrophy due to disuse. However, monotonic change at the synaptic level manifests itself as bidirectional change at the dynamic level, where the result of adaptation resembles long-term potentiation (LTP) for single-pulse or low frequency test inputs but can resemble long-term depression (LTD) for higher frequency test inputs. This paradoxical behavior is traced to dual computational properties of phasic and tonic coding signal components. A parallel distributed match-reset-search process also helps stabilize memory. Without the match-reset-search system, dART becomes a type of distributed competitive learning network.
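The core mechanism described above — a dynamic weight equal to the rectified difference between coding-node activation and an adaptive threshold, with thresholds that only increase during learning — can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact dART equations: the function names, the activation vector `y`, the threshold vector `tau`, and the simple threshold-update rule are assumptions chosen to show the two qualitative properties the abstract states (rectification, and monotone threshold growth apportioned by activation).

```python
import numpy as np

def dynamic_weight(y, tau):
    """Dynamic weight: the rectified difference [y - tau]^+ between
    coding-node activation y and adaptive threshold tau."""
    return np.maximum(y - tau, 0.0)

def update_thresholds(y, tau, beta=1.0):
    """Illustrative threshold adaptation ('atrophy due to disuse'):
    thresholds never decrease, and the learned change is apportioned
    by how far each node's activation exceeds its threshold.
    beta = 1.0 corresponds to fast learning; smaller beta, slow learning.
    (The exact dART learning law differs; this is a hedged sketch.)"""
    return tau + beta * np.maximum(y - tau, 0.0)

# Example: one strongly active node, one below threshold, one silent.
y   = np.array([0.9, 0.3, 0.0])   # coding-node activations
tau = np.array([0.2, 0.5, 0.1])   # adaptive thresholds
w   = dynamic_weight(y, tau)      # only the first node transmits
tau = update_thresholds(y, tau)   # thresholds rise monotonically
```

With fast learning (`beta = 1`), the threshold of the active node rises to meet its activation, so the same test input subsequently produces a smaller dynamic weight — a toy illustration of how monotone synaptic change can appear as bidirectional (LTP- or LTD-like) change at the dynamic-weight level.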