Cohen Clara, Higham Catherine F, Nabi Syed Waqar
English Language & Linguistics, University of Glasgow, Glasgow, United Kingdom.
School of Computing Science, University of Glasgow, Glasgow, United Kingdom.
Front Artif Intell. 2020 Jun 24;3:43. doi: 10.3389/frai.2020.00043. eCollection 2020.
Learning a second language (L2) usually progresses faster if a learner's L2 is similar to their first language (L1). Yet global similarity between languages is difficult to quantify, obscuring its precise effect on learnability. Further, the combinatorial explosion of possible L1 and L2 language pairs, combined with the difficulty of controlling for idiosyncratic differences across language pairs and language learners, limits the generalizability of the experimental approach. In this study, we present a different approach, employing artificial languages and artificial learners. We built a set of five artificial languages whose underlying grammars and vocabulary were manipulated to ensure a known degree of similarity between each pair of languages. We next built a series of neural network models for each language and sequentially trained them on pairs of languages. These models thus represented L1 speakers learning L2s. By observing the change in activity of the cells between the L1-speaker model and the L2-learner model, we estimated how much change was needed for the model to learn the new language. We then compared the change for each L1/L2 bilingual model to the underlying similarity across each language pair. The results showed that this approach can not only recover the facilitative effect of similarity on L2 acquisition, but can also offer new insights into the differential effects across domains of similarity. These findings serve as a proof of concept for a generalizable approach that can be applied to natural languages.
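To make the sequential-training idea concrete, the sketch below (not the authors' code) trains a small LSTM "speaker" on a toy artificial L1, continues training the same network on an L2, and measures how much its hidden-cell activity changes on a fixed probe set as a proxy for the effort needed to learn the new language. The vocabulary size, Markov-chain "grammars", perturbation-based similarity manipulation, network architecture, and the mean-absolute-change metric are all illustrative assumptions, not details taken from the paper.

# A minimal sketch of sequential L1-then-L2 training with an activity-change
# measure. All corpus, architecture, and metric choices are assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

VOCAB = 12      # shared toy vocabulary size (assumption)
SEQ_LEN = 20
N_SENT = 200

def make_corpus(transition):
    """Sample sentences from a first-order Markov 'grammar' (toy stand-in)."""
    data = torch.zeros(N_SENT, SEQ_LEN, dtype=torch.long)
    for i in range(N_SENT):
        tok = torch.randint(VOCAB, (1,)).item()
        for t in range(SEQ_LEN):
            data[i, t] = tok
            tok = torch.multinomial(transition[tok], 1).item()
    return data

# Two artificial languages: L2's transition matrix is a perturbed copy of L1's,
# giving a controllable (here arbitrary) degree of similarity between the pair.
base = torch.softmax(torch.randn(VOCAB, VOCAB), dim=1)
l1_grammar = base
l2_grammar = torch.softmax(torch.log(base) + 0.5 * torch.randn(VOCAB, VOCAB), dim=1)

class SpeakerLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, 16)
        self.lstm = nn.LSTM(16, hidden, batch_first=True)
        self.out = nn.Linear(hidden, VOCAB)

    def forward(self, x):
        h, _ = self.lstm(self.emb(x))
        return self.out(h), h       # next-token logits and hidden-cell activity

def train(model, corpus, epochs=30):
    """Next-token prediction training on one language's corpus."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        logits, _ = model(corpus[:, :-1])
        loss = loss_fn(logits.reshape(-1, VOCAB), corpus[:, 1:].reshape(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

l1_corpus, l2_corpus = make_corpus(l1_grammar), make_corpus(l2_grammar)
probe = l1_corpus[:50]              # fixed probe sentences for comparing activity

model = train(SpeakerLSTM(), l1_corpus)     # the "L1 speaker"
with torch.no_grad():
    _, act_before = model(probe)

model = train(model, l2_corpus)             # the same model now "learns" L2
with torch.no_grad():
    _, act_after = model(probe)

# Mean absolute change in hidden-cell activity: under this sketch, larger values
# suggest more adaptation was needed, which the paper relates to L1/L2 dissimilarity.
change = (act_after - act_before).abs().mean().item()
print(f"mean change in hidden activity after L2 training: {change:.4f}")

Run over all ordered pairs of artificial languages, a measure like this could then be correlated with the known grammatical and lexical similarity of each pair, which is the comparison the abstract describes.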