Integrating pictorial information across eye movements.

Author information

Pollatsek A, Rayner K, Collins WE

Publication information

J Exp Psychol Gen. 1984 Sep;113(3):426-42. doi: 10.1037/0096-3445.113.3.426.

Abstract

Six experiments are reported dealing with the types of information integrated across eye movements in picture perception. A line drawing of an object was presented in peripheral vision, and subjects made an eye movement to it. During the saccade, the initially presented picture was replaced by another picture that the subject was instructed to name as quickly as possible. The relation between the stimulus on the first fixation and the stimulus on the second fixation was varied. Across the six experiments, there was about 100-130 ms facilitation when the pictures were identical compared with a control condition in which only the target location was specified on the first fixation. This finding clearly implies that information about the first picture facilitated naming the second picture. Changing the size of the picture from one fixation to the next had little effect on naming time. This result is consistent with work on reading and low-level visual processes in indicating that pictorial information is not integrated in a point-by-point manner in an integrated visual buffer. Moreover, only about 50 ms of the facilitation for identical pictures could be attributed to the pictures having the same name. When the pictures represented the same concept (e.g., two different pictures of a horse), there was a 90-ms facilitation effect that could have been the result of either the visual or conceptual similarity of the pictures. However, when the pictures had different names, only visual similarity produced facilitation. Moreover, when the pictures had different names, there appeared to be inhibition from the competing names. The results of all six experiments are consistent with a model in which the activation of both the visual features and the name of the picture seen on the first fixation survive the saccade and combine with the information extracted on the second fixation to produce identification and naming of the second picture.

