Dominey Peter Ford, Inui Toshio, Hoen Michel
INSERM U846 Stem Cell and Brain Research Institute, 18 ave. Doyen Lepine, 69675 Bron Cedex, France.
Brain Lang. 2009 May-Jun;109(2-3):80-92. doi: 10.1016/j.bandl.2008.08.002. Epub 2008 Oct 5.
A central issue in cognitive neuroscience today concerns how distributed neural networks in the brain that are used in language learning and processing can be involved in non-linguistic cognitive sequence learning. This issue is informed by a wealth of functional neurophysiology studies of sentence comprehension, along with a number of recent studies that examined the brain processes involved in learning non-linguistic sequences, or artificial grammar learning (AGL). The current research attempts to reconcile these data with several current neurophysiologically based models of sentence processing, through the specification of a neural network model whose architecture is constrained by the known cortico-striato-thalamo-cortical (CSTC) neuroanatomy of the human language system. The challenge is to develop simulation models that take into account constraints from both neuroanatomical connectivity and functional imaging data, and that can actually learn and perform the same kinds of language and artificial syntax tasks. In our proposed model, structural cues encoded in a recurrent cortical network in BA47 activate a CSTC circuit to modulate the flow of lexical semantic information from BA45 to an integrated representation of meaning at the sentence level in BA44/6. During language acquisition, corticostriatal plasticity is employed to allow closed class structure to drive thematic role assignment. From the AGL perspective, repetitive internal structure in the AGL strings is encoded in BA47, and activates the CSTC circuit to predict the next element in the sequence. Simulation results from Caplan's [Caplan, D., Baker, C., & Dehaut, F. (1985). Syntactic determinants of sentence comprehension in aphasia. Cognition, 21, 117-175] test of syntactic comprehension, and from Gomez and Schvaneveldt's [Gomez, R. L., & Schvaneveldt, R. W. (1994). What is learned from artificial grammars? Transfer tests of simple association. Journal of Experimental Psychology: Learning, Memory and Cognition, 20, 396-410] artificial grammar learning experiments are presented. These results are discussed in the context of a brain architecture for learning grammatical structure for multiple natural languages, and non-linguistic sequences.
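The computational principle described above — a fixed recurrent cortical network whose state is read out through a plastic corticostriatal pathway that predicts the next sequence element — can be illustrated with a minimal sketch. This is not the authors' implementation: the network sizes, learning rate, and the toy cyclic "grammar" below are all illustrative assumptions, with a fixed random recurrent network standing in for the BA47 structural encoding and a delta-rule readout standing in for corticostriatal plasticity in the CSTC loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy AGL-like sequence with repetitive internal structure (an assumption,
# not a grammar from the paper): A B C A B C ...
symbols = ["A", "B", "C"]
one_hot = np.eye(len(symbols))
seq = [symbols[t % 3] for t in range(1501)]

n_in, n_res = len(symbols), 50
W_in = rng.normal(0.0, 1.0, (n_res, n_in))
W_rec = rng.normal(0.0, 1.0, (n_res, n_res))
W_rec *= 0.9 / max(abs(np.linalg.eigvals(W_rec)))  # contractive recurrent dynamics
W_out = np.zeros((len(symbols), n_res))            # plastic "corticostriatal" readout

lr, x, correct = 0.05, np.zeros(n_res), []
for t in range(len(seq) - 1):
    u = one_hot[symbols.index(seq[t])]
    x = np.tanh(W_in @ u + W_rec @ x)        # recurrent encoding of structure
    y = W_out @ x                            # prediction of the next element
    target = one_hot[symbols.index(seq[t + 1])]
    W_out += lr * np.outer(target - y, x)    # delta-rule plasticity on the readout
    correct.append(int(np.argmax(y) == np.argmax(target)))

acc = float(np.mean(correct[-300:]))         # accuracy after learning
print(acc)
```

After training, next-element predictions on the repetitive sequence become highly accurate: the recurrent network's state comes to encode the sequence's internal structure, and only the readout weights are modified by learning, mirroring the division of labor between fixed cortical dynamics and plastic corticostriatal connections in the model.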