Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD 20892.
Proc Natl Acad Sci U S A. 2014 Mar 11;111(10):E962-71. doi: 10.1073/pnas.1312567111. Epub 2014 Feb 24.
Perception reflects an integration of "bottom-up" (sensory-driven) and "top-down" (internally generated) signals. Although models of visual processing often emphasize the central role of feed-forward hierarchical processing, less is known about the impact of top-down signals on complex visual representations. Here, we investigated whether and how the observer's goals modulate object processing across the cortex. We examined responses elicited by a diverse set of objects under six distinct tasks, focusing on either physical (e.g., color) or conceptual properties (e.g., man-made). Critically, the same stimuli were presented in all tasks, allowing us to investigate how task impacts the neural representations of identical visual input. We found that task has an extensive and differential impact on object processing across the cortex. First, we found task-dependent representations in the ventral temporal and prefrontal cortex. In particular, although object identity could be decoded from the multivoxel response within task, there was a significant reduction in decoding across tasks. In contrast, the early visual cortex evidenced equivalent decoding within and across tasks, indicating task-independent representations. Second, task information was pervasive and present from the earliest stages of object processing. However, although the responses of the ventral temporal, prefrontal, and parietal cortex enabled decoding of both the type of task (physical/conceptual) and the specific task (e.g., color), the early visual cortex was not sensitive to type of task and could only be used to decode individual physical tasks. Thus, object processing is highly influenced by the behavioral goal of the observer, highlighting how top-down signals constrain and inform the formation of visual representations.
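The within-task versus across-task decoding comparison described above can be illustrated on synthetic data. This is a hypothetical sketch, not the authors' analysis pipeline: the voxel counts, noise levels, and the additive task component are all invented for illustration, and scikit-learn's linear SVM stands in for whatever multivoxel pattern classifier was actually used.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_objects, n_voxels, n_reps = 4, 50, 20

# Hypothetical multivoxel patterns: each object has a base pattern, and
# each task adds a task-specific component, so object identity is
# decodable within a task but the representation shifts across tasks.
object_patterns = rng.normal(0.0, 1.0, (n_objects, n_voxels))
task_components = rng.normal(0.0, 1.5, (2, n_voxels))

def simulate(task):
    """Simulate noisy trial patterns for every object under one task."""
    X, y = [], []
    for obj in range(n_objects):
        trials = (object_patterns[obj] + task_components[task]
                  + rng.normal(0.0, 1.0, (n_reps, n_voxels)))
        X.append(trials)
        y.extend([obj] * n_reps)
    return np.vstack(X), np.array(y)

X_a, y_a = simulate(0)  # e.g., a "physical" task
X_b, y_b = simulate(1)  # e.g., a "conceptual" task

# Within-task decoding: cross-validated object-identity accuracy in task A.
within = cross_val_score(LinearSVC(dual=False), X_a, y_a, cv=5).mean()

# Across-task decoding: train the classifier on task A, test on task B.
across = LinearSVC(dual=False).fit(X_a, y_a).score(X_b, y_b)

print(f"within-task accuracy: {within:.2f}")
print(f"across-task accuracy: {across:.2f}")
```

In this toy setup, a region whose object code is task-dependent (as the abstract reports for ventral temporal and prefrontal cortex) would show the drop from within-task to across-task accuracy; a task-independent region (as reported for early visual cortex) would show comparable accuracy in both.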