Department of Neurology and Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania 19104.
J Neurosci. 2018 May 23;38(21):4996-5007. doi: 10.1523/JNEUROSCI.3250-17.2018. Epub 2018 May 2.
Modern spatial navigation requires fluency with multiple representational formats, including visual scenes, signs, and words. These formats convey different information. Visual scenes are rich and specific but contain extraneous details. Arrows, as an example of signs, are schematic representations in which extraneous details are eliminated but analog spatial properties are preserved. Words eliminate all spatial information and convey spatial directions in a purely abstract form. How does the human brain compute spatial directions within and across these formats? To investigate this question, we conducted two experiments in men and women: a preregistered behavioral study and a neuroimaging study using multivoxel pattern analysis (MVPA) of fMRI data to uncover similarities and differences among representational formats. Participants in the behavioral study viewed spatial directions presented as images, schemas, or words (e.g., "left") and indicated on each trial whether the spatial direction was the same as or different from the one viewed previously. They responded more quickly to schemas and words than to images, even though the visual complexity of the stimuli was matched. Participants in the fMRI study performed the same task but responded only to occasional catch trials. Spatial directions in images were decodable bilaterally in the intraparietal sulcus, but spatial directions in schemas and words were not. Spatial directions were also decodable across all three formats. These results suggest that the intraparietal sulcus plays a role in computing spatial directions in visual scenes, but that this neural circuitry may be bypassed when spatial directions are presented as schemas or words.

SIGNIFICANCE STATEMENT: Human navigators encounter spatial directions in various formats: words ("turn left"), schematic signs (an arrow showing a left turn), and visual scenes (a road turning left). The brain must transform these spatial directions into a plan for action. Here, we investigate similarities and differences between the neural representations of these formats. We found that the bilateral intraparietal sulci represent spatial directions in visual scenes and across the three formats. We also found that participants responded fastest to schemas, then words, then images, suggesting that spatial directions in abstract formats are easier to interpret than those in concrete formats. These results support a model of spatial direction interpretation in which spatial directions are either computed for real-world action or computed for efficient visual comparison.
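To make the behavioral paradigm concrete, here is a minimal sketch in Python of the one-back same/different logic described in the abstract. This is not the authors' stimulus code; the direction set, trial counts, and the helper make_trials are illustrative assumptions. The key point it captures is that the correct response depends only on whether the direction matches the previous trial, regardless of the format in which either direction appeared.

```python
# A minimal sketch (illustrative, not the authors' code) of the one-back
# same/different judgment: on each trial a spatial direction appears in one
# of three formats, and the participant reports whether it matches the
# direction shown on the previous trial.
import random

DIRECTIONS = ["left", "right", "up", "down"]   # assumed direction set
FORMATS = ["image", "schema", "word"]

def make_trials(n_trials: int, seed: int = 0):
    """Generate a trial list; format and direction vary independently."""
    rng = random.Random(seed)
    trials = [
        {"format": rng.choice(FORMATS), "direction": rng.choice(DIRECTIONS)}
        for _ in range(n_trials)
    ]
    # The correct response compares the *direction* to the previous trial,
    # irrespective of format, so comparisons can occur across formats.
    for i, trial in enumerate(trials):
        trial["correct_response"] = (
            "same" if i > 0 and trial["direction"] == trials[i - 1]["direction"]
            else "different"
        )
    return trials

for t in make_trials(5):
    print(t)
```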
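The cross-format decoding result can likewise be sketched. The snippet below is a hedged illustration of the standard MVPA transfer approach rather than the authors' pipeline: fit a linear classifier on response patterns evoked by one format and score it on patterns from another. The ROI size, trial counts, and synthetic data are assumptions; a real analysis would use per-trial beta estimates extracted from, for example, an intraparietal sulcus ROI.

```python
# A minimal sketch of cross-format MVPA decoding (not the authors' pipeline).
# Train on patterns from one format, test on another; above-chance transfer
# accuracy would indicate a direction code shared across formats.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 300                  # hypothetical ROI and design

# Synthetic stand-ins for per-trial voxel patterns and direction labels.
y_images = rng.integers(0, 4, n_trials)        # labels for image trials
y_words = rng.integers(0, 4, n_trials)         # labels for word trials
X_images = rng.standard_normal((n_trials, n_voxels))
X_words = rng.standard_normal((n_trials, n_voxels))

clf = make_pipeline(StandardScaler(), LinearSVC(max_iter=10000))

# Cross-format decoding: fit on image trials, score on word trials.
# Chance is 0.25 with four directions; this random data sits near chance.
clf.fit(X_images, y_images)
print(f"images -> words accuracy: {clf.score(X_words, y_words):.2f}")
```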