Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405, USA.
Cogn Sci. 2013 May-Jun;37(4):731-56. doi: 10.1111/cogs.12029. Epub 2013 Mar 14.
Perceptual tasks such as object matching, mammogram interpretation, mental rotation, and satellite imagery change detection often require the assignment of correspondences to fuse information across views. We apply techniques developed for machine translation to gaze data recorded from a complex perceptual matching task modeled after fingerprint examinations. The gaze data provide temporal sequences that the machine translation algorithm uses to estimate the regions that subjects assume to correspond. Our results show that experts and novices exhibit similar surface behavior, such as the number and duration of fixations. However, when applied to experts' data, the approach identifies more corresponding areas between the two prints. Fixations associated with clusters that map with high probability to corresponding locations on the other print are likely to have greater utility in a visual matching task. These techniques address a fundamental problem in eye tracking research with perceptual matching tasks: Given that the eyes always point somewhere, which fixations are the most informative and therefore likely to be relevant for the comparison task?
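The paper does not publish its implementation, but the core idea, treating the fixation-cluster sequences on the two prints as a "parallel corpus" and estimating which clusters translate to which, can be sketched with a classic word-alignment model. Below is a minimal, hypothetical illustration using IBM Model 1 style EM; the cluster labels (`L1`, `R1`, …) and trial data are invented for the example and are not from the study.

```python
from collections import defaultdict

def ibm_model1(pairs, iterations=10):
    """Estimate translation probabilities t(tgt | src) with IBM Model 1 EM.

    pairs: list of (src_seq, tgt_seq) tuples, where each sequence is a list
    of discrete region/cluster IDs (here: fixation clusters on one print).
    """
    src_vocab = {s for src, _ in pairs for s in src}
    tgt_vocab = {t for _, tgt in pairs for t in tgt}
    # Uniform initialisation over the target vocabulary.
    t = {(tg, s): 1.0 / len(tgt_vocab) for s in src_vocab for tg in tgt_vocab}
    for _ in range(iterations):
        count = defaultdict(float)   # expected co-occurrence counts
        total = defaultdict(float)   # per-source normalisers
        for src, tgt in pairs:
            for tg in tgt:
                z = sum(t[(tg, s)] for s in src)
                for s in src:
                    c = t[(tg, s)] / z
                    count[(tg, s)] += c
                    total[s] += c
        for (tg, s) in t:
            t[(tg, s)] = count[(tg, s)] / total[s] if total[s] else 0.0
    return t

# Toy gaze "parallel corpus": each trial pairs the cluster sequence visited
# on the left print with the sequence visited on the right print.
trials = [
    (["L1", "L2"], ["R1", "R2"]),
    (["L1", "L3"], ["R1", "R3"]),
    (["L2", "L3"], ["R2", "R3"]),
]
t = ibm_model1(trials, iterations=20)
best = max(("R1", "R2", "R3"), key=lambda r: t[(r, "L1")])
print(best)  # cluster on the right print that aligns most strongly with L1
```

In this toy corpus, `L1` co-occurs with `R1` in two of three trials, so EM concentrates probability on that pairing; high-probability alignments of this kind are what the paper treats as the informative, task-relevant fixation clusters.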