Department of Psychology and Program in Cognitive Science, University of Connecticut, Storrs, CT 06269, USA.
Top Cogn Sci. 2013 Jul;5(3):634-67. doi: 10.1111/tops.12036. Epub 2013 Jun 24.
We examine two connectionist networks, a fractal learning neural network (FLNN) and a Simple Recurrent Network (SRN), that are trained to process center-embedded symbol sequences. Previous work provides evidence that connectionist networks trained on infinite-state languages tend to form fractal encodings. Most such work focuses on simple counting recursion cases (e.g., a^n b^n), which are not comparable to the complex recursive patterns seen in natural language syntax. Here, we consider exponential state growth cases (including mirror recursion), describe a new training scheme that seems to facilitate learning, and note that the connectionist learning of these cases has a continuous metamorphosis property that looks very different from what is achievable with symbolic encodings. We identify a property, ragged progressive generalization, which helps make this difference clearer. We suggest two conclusions. First, the fractal analysis of these more complex learning cases reveals the possibility of comparing connectionist networks and symbolic models of grammatical structure in a principled way; this helps remove the black box character of connectionist networks and indicates how the theory they support differs from symbolic approaches. Second, the findings indicate the value of future, linked mathematical and empirical work on these models, something that is more possible now than it was 10 years ago.
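The abstract contrasts counting recursion (a^n b^n), where only a count must be tracked, with mirror recursion, where each opening symbol must be closed by its own matching partner in reverse order, so the number of distinct states grows exponentially with embedding depth. The following minimal Python sketch (not from the paper; the function names and two-letter alphabet are illustrative assumptions) generates sample strings of both types, the kind of center-embedded training sequences such networks might be trained on.

```python
import random

def counting_recursion(n):
    # Counting recursion: n copies of "a" followed by n copies of "b".
    # Only the count must be remembered, so required state grows linearly with n.
    return ["a"] * n + ["b"] * n

def mirror_recursion(n, alphabet=("a", "b")):
    # Mirror recursion: a random string of opening symbols followed by its
    # mirror image in matching closing symbols ("A" closes "a", "B" closes "b").
    # The whole prefix must be remembered, so required state grows exponentially with n.
    opening = [random.choice(alphabet) for _ in range(n)]
    closing = [sym.upper() for sym in reversed(opening)]
    return opening + closing

if __name__ == "__main__":
    random.seed(0)
    print(counting_recursion(3))  # ['a', 'a', 'a', 'b', 'b', 'b']
    print(mirror_recursion(3))    # e.g. ['b', 'a', 'a', 'A', 'A', 'B']
```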