Spatial constraints on learning in visual search: modeling contextual cuing.

Author Information

Brady Timothy F, Chun Marvin M

Affiliation

Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.

Publication Information

J Exp Psychol Hum Percept Perform. 2007 Aug;33(4):798-815. doi: 10.1037/0096-1523.33.4.798.

Abstract

Predictive visual context facilitates visual search, a benefit termed contextual cuing (M. M. Chun & Y. Jiang, 1998). In the original task, search arrays were repeated across blocks such that the spatial configuration (context) of all of the distractors in a display predicted an embedded target location. The authors modeled existing results using a connectionist architecture and then designed new behavioral experiments to test the model's assumptions. The modeling and behavioral results indicate that learning may be restricted to the local context even when the entire configuration is predictive of target location. Local learning constrains how much guidance is produced by contextual cuing. The modeling and new data also demonstrate that local learning requires that the local context maintain its location in the overall global context.
