Uppsala Child and Baby Lab, Department of Psychology, Uppsala University, Box 1225, 751 42, Uppsala, Sweden.
Psychol Res. 2019 Sep;83(6):1269-1280. doi: 10.1007/s00426-017-0939-6. Epub 2017 Dec 21.
In everyday life, both the head and the hand movements of another person reveal that person's action target. However, studies on the development of action prediction have primarily used displays in which only hand movements, and no head movements, were visible. Given that infants acquire both the ability to follow others' gaze and the ability to predict others' reaching actions within their first year, the question is whether they rely mostly on the hand or on the head when predicting others' manual actions. The current study aimed to answer this question using a screen-based eye-tracking setup. Thirteen-month-old infants observed a model transporting plastic rings from one side of the screen to the other and placing them on a pole. In randomized trials, the model's head was either visible or occluded. The dependent variable was gaze-arrival time, which indicated whether participants predicted the model's action targets. Gaze-arrival times did not differ between the condition in which the head was visible and the condition in which it was rendered invisible. Furthermore, target looks that followed looks at the hand were predictive, whereas target looks that followed looks at the head were reactive. In sum, the study shows that 13-month-olds are capable of predicting an individual's action target based on the observed hand movements but not the head movements. The data suggest that earlier findings on infants' action prediction in screen-based tasks, in which often only the hands were visible, may well generalize to real-life settings in which infants have visual access to the actor's head.