School of Psychology, University of Plymouth, Drake Circus, Devon PL4 8AA, UK.
Cognition. 2020 Jun;199:104241. doi: 10.1016/j.cognition.2020.104241. Epub 2020 Feb 24.
Other people's (imagined) visual perspectives are represented perceptually in a similar way to our own, and can drive bottom-up processes in the same way as our own perceptual input (Ward, Ganis, & Bach, 2019). Here we test directly whether visual perspective taking is driven by where another person is looking, or whether these perceptual simulations represent their position in space more generally. Across two experiments, we asked participants to identify whether alphanumeric characters, presented at one of eight possible orientations away from upright, were shown normally or in their mirror-inverted form (e.g. "R" vs. "Я"). In some scenes, a person appeared sitting to the left or the right of the participant. We manipulated, either between trials (Experiment 1) or between subjects (Experiment 2), the gaze direction of the inserted person, such that they either (1) looked towards the to-be-judged item, (2) averted their gaze away from it, or (3) gazed out towards the participant (Exp. 2 only). In the absence of another person, we replicated the well-established mental rotation effect, whereby recognition of items becomes slower the further they are oriented away from upright (e.g. Shepard & Metzler, 1971). Crucially, in both experiments and in all conditions, this response pattern changed when another person was inserted into the scene. Participants spontaneously took the perspective of the other person and made faster judgements about the presented items when the characters were oriented closer to upright from that person's viewpoint. The gaze direction of this other person did not influence these effects. We propose that visual perspective taking is therefore a general spatial-navigational ability, allowing us to calculate more easily how a scene would (in principle) look from another position in space, and that such calculations reflect the spatial location of another person, but not their gaze.