Linguistics Program, Department of English, George Mason University, Fairfax, VA 22030, USA.
Top Cogn Sci. 2013 Jul;5(3):392-424. doi: 10.1111/tops.12027. Epub 2013 May 23.
According to classical arguments, language learning is both facilitated and constrained by cognitive biases. These biases are reflected in linguistic typology (the distribution of linguistic patterns across the world's languages) and can be probed with artificial grammar experiments on child and adult learners. Beginning with a widely successful approach to typology (Optimality Theory), and adapting techniques from computational approaches to statistical learning, we develop a Bayesian model of cognitive biases and show that it accounts for the detailed pattern of results of artificial grammar experiments on noun-phrase word order (Culbertson, Smolensky, & Legendre, 2012). Our proposal has several novel properties that distinguish it from prior work in the domains of linguistic theory, computational cognitive science, and machine learning. This study illustrates how ideas from these domains can be synthesized into a model of language learning in which biases range in strength from hard (absolute) to soft (statistical), and in which language-specific and domain-general biases combine to account for data from the macro-level scale of typological distribution to the micro-level scale of learning by individuals.
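To make the abstract's central idea concrete, the sketch below shows one way a Bayesian learner with soft prior biases over noun-phrase word-order patterns might be coded. It is an illustration only, not the authors' model (which is formulated over Optimality-Theoretic constraints): the four pattern labels, the prior weights (harmony_bonus, penalty), the choice of which non-harmonic pattern is penalized, and the noise rate eps are all assumptions made here for exposition.

```python
# A minimal sketch (not the paper's implementation) of a Bayesian learner
# whose prior softly favors harmonic noun-phrase patterns (Adj and Num on
# the same side of the noun) and penalizes one non-harmonic pattern, and
# whose posterior is updated from probabilistic training input, as in
# artificial-grammar experiments. All numeric values are illustrative.

import math
from itertools import product

# The four logically possible patterns: (Adj order, Num order) relative to N.
PATTERNS = list(product(["pre", "post"], repeat=2))

def log_prior(pattern, harmony_bonus=1.0, penalty=2.0):
    """Soft bias: reward harmony; penalize one assumed rare pattern."""
    adj, num = pattern
    score = harmony_bonus if adj == num else 0.0
    if (adj, num) == ("pre", "post"):  # assumed typologically rare pattern
        score -= penalty
    return score

def log_likelihood(data, pattern, eps=0.25):
    """Likelihood of observed phrases, with noise rate eps allowing
    minority-order utterances (a soft, not hard, grammar)."""
    ll = 0.0
    for modifier, order in data:  # e.g. ("Adj", "pre")
        expected = pattern[0] if modifier == "Adj" else pattern[1]
        ll += math.log(1 - eps if order == expected else eps)
    return ll

def posterior(data):
    """Normalized posterior over the four patterns."""
    logp = {p: log_prior(p) + log_likelihood(data, p) for p in PATTERNS}
    z = max(logp.values())
    unnorm = {p: math.exp(v - z) for p, v in logp.items()}
    total = sum(unnorm.values())
    return {p: v / total for p, v in unnorm.items()}

# Mixed input: 70% prenominal for both modifiers, mimicking a probabilistic
# majority pattern of the kind used in the experiments.
data = [("Adj", "pre")] * 7 + [("Adj", "post")] * 3 \
     + [("Num", "pre")] * 7 + [("Num", "post")] * 3
for pat, p in sorted(posterior(data).items(), key=lambda kv: -kv[1]):
    print(pat, round(p, 3))
```

Run on this input, the posterior concentrates on the harmonic prenominal pattern, a toy analogue of the regularization behavior the abstract describes: a soft prior pulls the learner beyond the raw statistics of the input without categorically ruling any pattern out.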