Henderson J M, Siefert A B
Department of Psychology, Michigan State University, East Lansing 48824-1117, USA.
Psychon Bull Rev. 2001 Dec;8(4):753-60. doi: 10.3758/bf03196214.
What types of representations support our ability to integrate information acquired during one eye fixation with information acquired during the next fixation? In Experiment 1, transsaccadic integration was explored by manipulating whether or not the relative position of a picture of an object was maintained across a saccade. In Experiment 2, the degree to which visual details of a picture are coded in a position-specific representational system was explored by manipulating whether or not both the relative position and the left-right orientation of the picture were maintained across a saccade. Position-specific and nonspecific preview benefits were observed in both experiments. Only the position-specific benefits were influenced by the number of task-relevant pictures presented in the preview display (Experiment 1) and the left-right orientation of the picture presented in the preview display (Experiment 2). The results support a model of transsaccadic integration based on two independent representational systems. One system codes abstract, prestored object types, and the other codes episodic tokens consisting of stimulus properties linked to scene- or configuration-based position markers.