CerCo, CNRS, 31055, Toulouse, France.
Laboratoire Cognition, Langues, Langage, Ergonomie, CNRS, Université Toulouse, Toulouse, France.
Sci Rep. 2020 Dec 17;10(1):22172. doi: 10.1038/s41598-020-79127-y.
In recent years, artificial neural networks have achieved performance close to or better than that of humans in several domains: tasks that were previously human prerogatives, such as language processing, have witnessed remarkable improvements in state-of-the-art models. One advantage of this technological boost is that it facilitates comparisons between different neural networks and human performance, deepening our understanding of human cognition. Here, we investigate which neural network architecture (feedforward vs. recurrent) better matches human behavior in artificial grammar learning, a crucial aspect of language acquisition. Prior experimental studies have shown that human subjects can learn artificial grammars after little exposure, often without explicit knowledge of the underlying rules. We tested four grammars of different complexity levels in humans and in feedforward and recurrent networks. Our results show that both architectures can "learn" the grammars (via error back-propagation) after the same number of training sequences as humans, but recurrent networks perform closer to humans than feedforward ones, irrespective of grammar complexity. Moreover, by analogy with visual processing, in which feedforward and recurrent architectures have been related to unconscious and conscious processes respectively, the difference in performance between the two architectures over ten regular grammars shows that simpler and more explicit grammars are better learnt by recurrent architectures, supporting the hypothesis that explicit learning is best modeled by recurrent networks, whereas feedforward networks may instead capture the dynamics involved in implicit learning.
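As a minimal illustrative sketch of what artificial grammar learning stimuli look like, the snippet below samples strings from a toy finite-state (regular) grammar and checks grammaticality. The transition table `GRAMMAR` is hypothetical, not one of the paper's four grammars; in the actual experiments, such strings would be shown to participants or fed (encoded) to the networks.

```python
import random

# Hypothetical finite-state grammar (NOT one of the paper's grammars):
# each state maps to a list of (symbol, next_state) transitions;
# next_state None marks the accepting (end-of-string) state.
GRAMMAR = {
    0: [("A", 1), ("B", 2)],
    1: [("C", 2), ("A", 1)],
    2: [("D", None), ("B", 1)],
}

def generate(rng, max_len=10):
    """Sample one grammatical string by walking the transition graph
    from state 0 until the accepting state is reached."""
    while True:
        state, out = 0, []
        while state is not None:
            sym, state = rng.choice(GRAMMAR[state])
            out.append(sym)
        if len(out) <= max_len:          # resample overly long walks
            return "".join(out)

def is_grammatical(s):
    """Accept s iff some path through GRAMMAR produces it exactly."""
    def walk(state, i):
        if i == len(s):
            return state is None         # must end in the accepting state
        if state is None:
            return False                 # string continues past the end
        return any(sym == s[i] and walk(nxt, i + 1)
                   for sym, nxt in GRAMMAR[state])
    return walk(0, 0)

rng = random.Random(0)
samples = [generate(rng) for _ in range(5)]
print(samples)                    # five grammatical strings
print(is_grammatical("BD"))      # True: 0 -B-> 2 -D-> accept
print(is_grammatical("AAAA"))    # False: never reaches the accepting state
```

Ungrammatical foils for a grammaticality-judgment test can then be built by perturbing sampled strings until `is_grammatical` rejects them.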