Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands.
Cogn Sci. 2012 Aug;36(6):1078-101. doi: 10.1111/j.1551-6709.2012.01235.x. Epub 2012 Mar 27.
A recent hypothesis in empirical brain research on language is that the fundamental difference between animal and human communication systems is captured by the distinction between finite-state and more complex phrase-structure grammars, such as context-free and context-sensitive grammars. However, the relevance of this distinction for the study of language as a neurobiological system has been questioned, and it has been suggested that a more relevant and partly analogous distinction is that between non-adjacent and adjacent dependencies. Online memory resources are central to the processing of non-adjacent dependencies, as information has to be maintained across intervening material. One proposal is that an external memory device in the form of a limited push-down stack is used to process non-adjacent dependencies. We tested this hypothesis in an artificial grammar learning paradigm in which subjects acquired non-adjacent dependencies implicitly. Overall, we found no qualitative differences between the acquisition of non-adjacent and adjacent dependencies. This suggests that although the acquisition of non-adjacent dependencies requires more exposure to the acquisition material, it relies on the same mechanisms used to acquire adjacent dependencies. We challenge the push-down stack model further by testing its processing predictions for nested and crossed multiple non-adjacent dependencies. The results partly support the push-down stack model, and we suggest that stack-like properties are among many natural properties characterizing the underlying neurophysiological mechanisms that implement the online memory resources used in language and structured sequence processing.