Westermann Gert, Miranda Eduardo Reck
Centre for Brain and Cognitive Development, School of Psychology, Birkbeck College, University of London, Malet Street, London WC1E 7HX, UK.
Brain Lang. 2004 May;89(2):393-400. doi: 10.1016/S0093-934X(03)00345-6.
We present a computational model that learns a coupling between motor parameters and their sensory consequences in vocal production during a babbling phase. Based on this coupling, preferred motor parameters and prototypically perceived sounds develop concurrently. Exposure to an ambient language modifies perception to coincide with the sounds of that language. The model develops motor mirror neurons that are active when an external sound is perceived. An extension to visual mirror neurons for oral gestures is suggested.
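The mechanism described in the abstract — learning a coupling between motor parameters and their auditory consequences through babbling, such that a perceived sound later activates the motor units that would produce it — can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's actual model: it uses 1-D Gaussian-tuned motor and auditory populations, an invented `articulate` mapping, and plain Hebbian outer-product learning.

```python
import numpy as np

rng = np.random.default_rng(0)

n_motor, n_aud = 30, 30
# Gaussian tuning curves over a 1-D motor parameter and a 1-D sound feature
motor_prefs = np.linspace(0.0, 1.0, n_motor)
aud_prefs = np.linspace(0.0, 1.0, n_aud)

def activation(prefs, x, sigma=0.05):
    """Population activity for stimulus value x (Gaussian tuning)."""
    return np.exp(-(x - prefs) ** 2 / (2 * sigma ** 2))

def articulate(m):
    """Toy articulatory-to-acoustic mapping (hypothetical, not the paper's)."""
    return m ** 2

W = np.zeros((n_motor, n_aud))  # Hebbian motor-auditory coupling weights

# Babbling phase: random motor parameters, Hebbian co-activation update
for _ in range(2000):
    m = rng.random()                    # random motor parameter
    s = articulate(m)                   # its sensory (acoustic) consequence
    a_m = activation(motor_prefs, m)    # motor population activity
    a_s = activation(aud_prefs, s)      # auditory population activity
    W += 0.01 * np.outer(a_m, a_s)      # strengthen co-active pairs

W /= W.max()

# "Mirror" response: an external sound activates the coupled motor units,
# peaking at the motor parameter that would have produced that sound
external_sound = articulate(0.6)
motor_response = W @ activation(aud_prefs, external_sound)
best_motor = motor_prefs[np.argmax(motor_response)]
```

After babbling, `best_motor` lies near 0.6: hearing the sound evokes the motor configuration that generates it, the property the abstract attributes to motor mirror neurons.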