Michael A. Cohen, George A. Alvarez, Ken Nakayama, Talia Konkle
McGovern Institute for Brain Research, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts; and
Department of Psychology, Harvard University, Cambridge, Massachusetts.
J Neurophysiol. 2017 Jan 1;117(1):388-402. doi: 10.1152/jn.00569.2016. Epub 2016 Nov 2.
Visual search is a ubiquitous visual behavior, and efficient search is essential for survival. Different cognitive models have explained the speed and accuracy of search based either on the dynamics of attention or on the similarity of item representations. Here, we examined the extent to which performance on a visual search task can be predicted from the stable representational architecture of the visual system, independent of attentional dynamics. Participants performed a visual search task with 28 conditions reflecting different pairs of categories (e.g., searching for a face among cars, a body among hammers, etc.). The time it took participants to find the target item varied as a function of category combination. In a separate group of participants, we measured the neural responses to these object categories when items were presented in isolation. Using representational similarity analysis, we then examined whether the similarity of neural responses across different subdivisions of the visual system had the requisite structure needed to predict visual search performance. Overall, we found strong brain/behavior correlations across most of the higher-level visual system, including both the ventral and dorsal pathways, when considering both macroscale sectors and smaller mesoscale regions. These results suggest that visual search for real-world object categories is well predicted by the stable, task-independent architecture of the visual system.
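The brain/behavior comparison described above follows the standard representational similarity analysis (RSA) logic: build a neural dissimilarity matrix over category pairs, build a behavioral dissimilarity matrix from search times over the same pairs, and rank-correlate their off-diagonal entries. The sketch below illustrates that logic with synthetic placeholder data (the region size, noise model, and statistics are assumptions, not the paper's actual pipeline); note that 8 categories yield the 28 pairs mentioned in the abstract.

```python
# Minimal RSA brain/behavior sketch with synthetic data.
# Assumptions (not from the paper): 8 categories, 100 voxels,
# Gaussian patterns, uniform random search times.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_categories = 8   # 8 choose 2 = 28 category pairs
n_voxels = 100     # hypothetical voxel count for one region

# Neural patterns: one response vector per category (synthetic).
patterns = rng.normal(size=(n_categories, n_voxels))

# Neural RDM: 1 - Pearson correlation between category patterns.
neural_rdm = 1.0 - np.corrcoef(patterns)

# Behavioral RDM: mean search time for each category pair
# (synthetic stand-in for the measured reaction times).
behavior_rdm = rng.uniform(0.5, 2.0, size=(n_categories, n_categories))
behavior_rdm = (behavior_rdm + behavior_rdm.T) / 2.0  # symmetrize

# Rank-correlate the lower triangles (28 pairs each), the
# conventional RSA comparison statistic.
tri = np.tril_indices(n_categories, k=-1)
rho, p = spearmanr(neural_rdm[tri], behavior_rdm[tri])
print(f"brain/behavior Spearman rho = {rho:.3f} over {len(tri[0])} pairs")
```

In the study itself, this comparison was repeated across different subdivisions of visual cortex to ask which regions carry the requisite structure; with random data as here, the correlation is of course expected to be near zero.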
NEW & NOTEWORTHY: Here, we ask which neural regions have response patterns that correlate with behavioral performance in a visual processing task. We found that the representational structure across all of high-level visual cortex has the requisite structure to predict behavior. Furthermore, when directly comparing different neural regions, we found that they all had highly similar category-level representational structures. These results point to a ubiquitous and uniform representational structure in high-level visual cortex underlying visual object processing.