Center for Language Sciences, University of Rochester, Rochester, NY 14627, USA.
Cogn Sci. 2012 Nov-Dec;36(8):1468-98. doi: 10.1111/j.1551-6709.2012.01264.x. Epub 2012 Sep 10.
In this article, we develop a hierarchical Bayesian model of learning in a general type of artificial language-learning experiment in which learners are exposed to a mixture of grammars representing the variation present in real learners' input, particularly at times of language change. The modeling goal is to formalize and quantify hypothesized learning biases. The test case is an experiment (Culbertson, Smolensky, & Legendre, 2012) targeting the learning of word-order patterns in the nominal domain. The model identifies internal biases of the experimental participants, providing evidence that learners impose (possibly arbitrary) properties on the grammars they learn, potentially resulting in the cross-linguistic regularities known as typological universals. Learners exposed to mixtures of artificial grammars tended to shift those mixtures in certain ways rather than others; the model reveals how learners' inferences are systematically affected by specific prior biases. These biases are in line with a typological generalization, Greenberg's Universal 18, which bans a particular word-order pattern relating nouns, adjectives, and numerals.
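The abstract does not spell out the model's specification, but the following is a minimal sketch of the general idea: a Bayesian learner inferring a two-dimensional grammar mixture (probability of prenominal adjectives and of prenominal numerals) from exposure data, under a prior that favors regular grammars and penalizes the Adj-N & N-Num corner banned by Universal 18. All priors, penalty strengths, and counts below are illustrative assumptions, not values from Culbertson et al. (2012).

```python
import numpy as np

# p_adj = P(adjective precedes noun), p_num = P(numeral precedes noun),
# evaluated on a discrete grid for simple exact inference.
grid = np.linspace(0.01, 0.99, 99)
P_ADJ, P_NUM = np.meshgrid(grid, grid, indexing="ij")

def log_beta_pdf(x, a, b):
    """Unnormalized log density of Beta(a, b) at x."""
    return (a - 1) * np.log(x) + (b - 1) * np.log(1 - x)

# Regularization bias: U-shaped Beta(0.5, 0.5) priors push each order
# probability toward 0 or 1 (learners prefer consistent grammars).
log_prior = log_beta_pdf(P_ADJ, 0.5, 0.5) + log_beta_pdf(P_NUM, 0.5, 0.5)

# Substantive bias: penalize the Universal-18-violating region, where
# adjectives are prenominal but numerals are postnominal.
log_prior -= 5.0 * P_ADJ * (1 - P_NUM)  # penalty strength is assumed

def posterior_mean(k_adj, n_adj, k_num, n_num):
    """Posterior mean of (p_adj, p_num) given observed word-order counts."""
    log_lik = (k_adj * np.log(P_ADJ) + (n_adj - k_adj) * np.log(1 - P_ADJ)
               + k_num * np.log(P_NUM) + (n_num - k_num) * np.log(1 - P_NUM))
    log_post = log_prior + log_lik
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    return (post * P_ADJ).sum(), (post * P_NUM).sum()

# A learner exposed to a variable input mixture (70% Adj-N, 30% Num-N):
# the inferred grammar shifts away from the banned Adj-N & N-Num corner
# rather than simply regularizing the input frequencies.
print(posterior_mean(k_adj=14, n_adj=20, k_num=6, n_num=20))
```

Under these assumptions, the posterior mean moves the mixture away from the Universal-18-violating pattern, illustrating how a prior bias can systematically reshape what learners infer from the same input statistics.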