Neural Comput. 2011 Jan;23(1):251-83. doi: 10.1162/NECO_a_00064. Epub 2010 Oct 21.
We studied the feedforward network proposed by Dandurand et al. (2010), which maps location-specific letter inputs to location-invariant word outputs, probing the hidden layer to determine the nature of the code. Hidden patterns for words were densely distributed, and K-means clustering on single letter patterns produced evidence that the network had formed semi-location-invariant letter representations during training. The possible confound with superseding bigram representations was ruled out, and linear regressions showed that any word pattern was well approximated by a linear combination of its constituent letter patterns. Emulating this code using overlapping holographic representations (Plate, 1995) uncovered a surprisingly acute and useful correspondence with the network, stemming from a broken symmetry in the connection weight matrix and related to the group-invariance theorem (Minsky & Papert, 1969). These results also explain how the network can reproduce relative and transposition priming effects found in humans.
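The linearity finding above (any word's hidden pattern is well approximated by a linear combination of its constituent letters' patterns) can be illustrated with a minimal least-squares sketch. All data here are synthetic stand-ins: the hidden-layer size, the letter patterns, and the noise level are assumptions for illustration, not the trained network's actual values.

```python
import numpy as np

rng = np.random.default_rng(0)
n_hidden = 64  # hypothetical hidden-layer size (assumption)
letters = "abcdefghijklmnopqrstuvwxyz"

# Hypothetical letter patterns: one hidden-layer vector per letter.
L = rng.normal(size=(26, n_hidden))

def word_pattern(word):
    # Synthetic "word" pattern: sum of its letter patterns plus noise,
    # mimicking the near-linear code reported for the trained network.
    idx = [letters.index(c) for c in word]
    return L[idx].sum(axis=0) + 0.1 * rng.normal(size=n_hidden)

w = word_pattern("cart")

# Regress the word pattern on its constituent letter patterns.
X = L[[letters.index(c) for c in "cart"]].T        # (n_hidden, 4)
coef, *_ = np.linalg.lstsq(X, w, rcond=None)
approx = X @ coef
r2 = 1 - np.sum((w - approx) ** 2) / np.sum((w - w.mean()) ** 2)
print("coefficients:", coef.round(2))
print("R^2:", round(r2, 3))
```

Under these assumptions the fitted coefficients come out near 1 and R² near 1, which is the signature of a linearly combinable letter code.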
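The holographic emulation referenced above (Plate, 1995) binds letter identities to position slots with circular convolution and superposes the bindings into a single word vector; circular correlation then approximately unbinds a slot. The sketch below is a generic holographic-reduced-representation example, not the paper's exact encoding scheme; the dimensionality and the slot vectors are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 512  # vector dimensionality (assumption)

def cconv(a, b):
    # Circular convolution via FFT: Plate's binding operator.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def ccorr(a, b):
    # Circular correlation: approximate inverse of binding.
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

def rand_vec():
    # Random vectors with expected unit norm, as Plate prescribes.
    return rng.normal(0.0, 1.0 / np.sqrt(d), d)

letter_vecs = {c: rand_vec() for c in "cart"}
slots = [rand_vec() for _ in range(4)]  # hypothetical position slots

# Encode "cart": superpose one position-letter binding per slot.
word = sum(cconv(slots[i], letter_vecs[c]) for i, c in enumerate("cart"))

# Decode slot 2: unbinding yields a noisy copy of the letter 'r'.
probe = ccorr(slots[2], word)
sims = {c: float(probe @ v) for c, v in letter_vecs.items()}
best = max(sims, key=sims.get)
print(best)  # expect 'r'
```

Because superposition degrades gracefully, nearby slots retrieve partially overlapping letters, which is the kind of behavior invoked to explain relative and transposition priming.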