Eye-centered, head-centered, and complex coding of visual and auditory targets in the intraparietal sulcus.

Author information

O'Dhaniel A. Mullette-Gillman, Yale E. Cohen, Jennifer M. Groh

Affiliation

Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755, USA.

Publication information

J Neurophysiol. 2005 Oct;94(4):2331-52. doi: 10.1152/jn.00021.2005. Epub 2005 Apr 20.

Abstract

The integration of visual and auditory events is thought to require a joint representation of visual and auditory space in a common reference frame. We investigated the coding of visual and auditory space in the lateral and medial intraparietal areas (LIP, MIP) as a candidate for such a representation. We recorded the activity of 275 neurons in LIP and MIP of two monkeys while they performed saccades to a row of visual and auditory targets from three different eye positions. Of these neurons, 45% were modulated by the locations of visual targets, 19% by auditory targets, and 9% by both visual and auditory targets. The reference frames of both visual and auditory receptive fields ranged along a continuum between eye- and head-centered. Approximately 10% of auditory and 33% of visual neurons had receptive fields more consistent with an eye- than a head-centered frame of reference, and 23% and 18%, respectively, had receptive fields more consistent with a head- than an eye-centered frame, leaving a large fraction of both visual and auditory response patterns inconsistent with either reference frame. These results resemble the reference frame we have previously found for auditory stimuli in the inferior colliculus and core auditory cortex. The correspondence between the visual and auditory receptive fields of individual neurons was weak. Nevertheless, the visual and auditory responses were sufficiently well correlated that a simple one-layer network, constructed to calculate target location from the activity of the neurons in our sample, performed successfully for auditory targets even though its weights were fit based only on the visual responses. We interpret these results as suggesting that although the representations of space in areas LIP and MIP are not easily described within the conventional conceptual framework of reference frames, they nevertheless process visual and auditory spatial information in a similar fashion.
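The one-layer readout described in the abstract can be sketched as a linear decoder whose weights are fit on visual population responses and then applied unchanged to auditory responses. The sketch below is illustrative only: the neuron count, Gaussian tuning curves, target spacing, and noise model are invented for this example, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 50 neurons, 9 target azimuths spanning +/-24 degrees.
n_neurons, n_targets = 50, 9
targets = np.linspace(-24, 24, n_targets)

# Gaussian spatial tuning with random preferred locations and gains; the
# auditory field is modeled as the visual field plus weak noise, standing in
# for responses that correspond imperfectly but are correlated overall.
preferred = rng.uniform(-24, 24, size=n_neurons)
gain = rng.uniform(0.5, 1.5, size=n_neurons)
visual = gain[:, None] * np.exp(-((targets - preferred[:, None]) ** 2) / (2 * 12.0**2))
auditory = visual + 0.01 * rng.standard_normal(visual.shape)

# One-layer linear readout: fit weights w so that w . r(target) ~= target,
# using ONLY the visual responses (ridge-regularized least squares).
X = visual.T                       # (n_targets, n_neurons) design matrix
G = X @ X.T                        # Gram matrix (dual form of ridge)
alpha = np.linalg.solve(G + 0.1 * np.eye(n_targets), targets)
w = X.T @ alpha                    # one readout weight per neuron

# Apply the SAME weights, unchanged, to the auditory responses.
visual_err = np.abs(X @ w - targets).mean()
auditory_err = np.abs(auditory.T @ w - targets).mean()
```

Because the simulated auditory responses are correlated with the visual ones, the visually fit weights also recover auditory target locations with small error, which is the transfer result the abstract reports.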