Morse Anthony F, Benitez Viridian L, Belpaeme Tony, Cangelosi Angelo, Smith Linda B
Cognition Institute, Center for Robotics and Neural Systems, University of Plymouth, Drake Circus, Plymouth, PL4 8AA, United Kingdom.
Department of Psychology, University of Wisconsin-Madison, 1202 W. Johnson St., Madison, WI 53706, United States of America.
PLoS One. 2015 Mar 18;10(3):e0116012. doi: 10.1371/journal.pone.0116012. eCollection 2015.
For infants, the first problem in learning a word is to map the word to its referent; a second problem is to remember that mapping when the word and/or referent are again encountered. Recent infant studies suggest that spatial location plays a key role in how infants solve both problems. Here we provide a new theoretical model and new empirical evidence on how the body, and its momentary posture, may be central to these processes. The present study uses a name-object mapping task in which names are encountered either in the absence of their target (experiments 1-3, 6 & 7) or when their target is present but in a location previously associated with a foil (experiments 4, 5, 8 & 9). A humanoid robot model (experiments 1-5) is used to instantiate and test the hypothesis that body-centric spatial location, and thus the body's momentary posture, is used to centrally bind the multimodal features of heard names and visual objects. The robot model is shown to replicate existing infant data and then to generate novel predictions, which are tested in new infant studies (experiments 6-9). Despite spatial location being task-irrelevant in this second set of experiments, infants use body-centric spatial contingency over temporal contingency to map the name to the object. Both infants and the robot remember the name-object mapping even in new spatial locations. However, the robot model shows how this memory can emerge not from separating bodily information from the word-object mapping, as proposed in previous models of the role of space in word-object mapping, but through the body's momentary disposition in space.