Voegtlin Thomas, Dominey Peter F
INRIA, Campus Scientifique, B.P. 239, F-54506 Vandoeuvre-Les-Nancy Cedex, France.
Neural Netw. 2005 Sep;18(7):878-95. doi: 10.1016/j.neunet.2005.01.005.
Connectionist networks have been criticized for their inability to represent complex structures with systematicity. That is, while they can be trained to represent and manipulate complex objects made of several constituents, they generally fail to generalize to novel combinations of the same constituents. This paper presents a modification of Pollack's Recursive Auto-Associative Memory (RAAM) that addresses this criticism. The network uses linear units and is trained with Oja's rule, which generalizes PCA to tree-structured data. Learned representations may be linearly combined in order to represent new complex structures. This results in unprecedented generalization capabilities: capacity is orders of magnitude higher than that of a RAAM trained with back-propagation. Moreover, regularities of the training set are preserved in the newly formed objects. The formation of new structures displays developmental effects similar to those observed in children learning to generalize about the argument structure of verbs.
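The training signal mentioned above, Oja's rule for a single linear unit, can be sketched as follows. This is a minimal illustration on flat vectors, not the paper's tree-structured extension; the synthetic data, learning rate, and variable names are assumptions made for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic zero-mean data whose leading principal component
# lies along a known unit direction (an assumed toy dataset).
direction = np.array([3.0, 1.0]) / np.sqrt(10.0)
samples = rng.standard_normal(5000)[:, None] * direction
samples += rng.standard_normal((5000, 2)) * 0.3  # isotropic noise

w = rng.standard_normal(2)  # random initial weight vector
eta = 0.01                  # learning rate (assumed value)

for x in samples:
    y = w @ x                    # linear unit output
    w += eta * y * (x - y * w)   # Oja's rule update

# Oja's rule drives w toward the unit-norm leading principal
# component of the data (up to sign).
cosine = abs(w @ direction) / np.linalg.norm(w)
print(round(float(cosine), 2))
```

The subtractive `y * w` term is what keeps the weight vector bounded, so no explicit normalization step is needed; the fixed point is the top eigenvector of the input covariance with unit norm.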