Crossmodal Perception and Plasticity Laboratory, Center of Mind/Brain Sciences, University of Trento, I-38122 Trento, Italy, and
Crossmodal Perception and Plasticity Laboratory, Institute for Research in Psychology, Institute of Neuroscience, Université catholique de Louvain, 1348 Louvain-la-Neuve, Belgium.
J Neurosci. 2019 Mar 20;39(12):2208-2220. doi: 10.1523/JNEUROSCI.2289-18.2018. Epub 2019 Jan 16.
The ability to compute the location and direction of sounds is a crucial perceptual skill for interacting efficiently with dynamic environments. How the human brain implements spatial hearing is, however, poorly understood. In our study, we used fMRI to characterize the brain activity of male and female humans listening to sounds moving left, right, up, and down, as well as static sounds. Whole-brain univariate results contrasting moving and static sounds varying in their location revealed a robust functional preference for auditory motion in bilateral human planum temporale (hPT). Using independently localized hPT, we show that this region contains information about auditory motion directions and, to a lesser extent, sound source locations. Moreover, hPT showed an axis-of-motion organization reminiscent of the functional organization of the middle-temporal cortex (hMT+/V5) for vision. Importantly, whereas motion direction and location rely on partially shared pattern geometries in hPT, as demonstrated by successful cross-condition decoding, the responses elicited by static and moving sounds were significantly distinct. Altogether, our results demonstrate that hPT codes for auditory motion and location, but that the neural computation underlying motion processing is more reliable than, and partially distinct from, the one supporting sound source location.

Compared with what we know about visual motion, little is known about how the brain implements spatial hearing. Our study reveals that motion directions and sound source locations can be reliably decoded in the human planum temporale (hPT) and that they rely on partially shared pattern geometries. Our study therefore sheds important new light on how computing the location or direction of sounds is implemented in the human auditory cortex by showing that these two computations rely on partially shared neural codes.
Furthermore, our results show that the neural representation of moving sounds in hPT follows a "preferred axis of motion" organization, reminiscent of the coding mechanisms typically observed for computing visual motion in the occipital middle-temporal cortex (hMT+/V5).
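The cross-condition decoding logic mentioned above can be illustrated with a toy simulation: a classifier trained to distinguish motion directions is tested on static sound locations, and above-chance transfer indicates partially shared pattern geometries. The sketch below uses a simple nearest-centroid classifier on synthetic voxel patterns; all variable names, signal parameters, and the data-generating model are illustrative assumptions, not the study's actual fMRI pipeline.

```python
# Toy illustration of cross-condition decoding (NOT the study's actual
# analysis): train on "moving" patterns, test on "static" patterns.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_runs = 50, 10
classes = ["left", "right", "up", "down"]

# Hypothetical shared "pattern geometry": each class has a voxel signature
# common to both conditions, plus a condition-specific offset and noise.
signatures = {c: rng.normal(0, 1, n_voxels) for c in classes}
motion_offset = rng.normal(0, 0.5, n_voxels)
static_offset = rng.normal(0, 0.5, n_voxels)

def simulate(offset):
    """Generate one noisy pattern per class per run for one condition."""
    X, y = [], []
    for _ in range(n_runs):
        for c in classes:
            X.append(signatures[c] + offset + rng.normal(0, 1.0, n_voxels))
            y.append(c)
    return np.array(X), np.array(y)

X_motion, y_motion = simulate(motion_offset)   # training set: moving sounds
X_static, y_static = simulate(static_offset)   # test set: static sounds

# Nearest-centroid classifier fit on motion directions...
centroids = {c: X_motion[y_motion == c].mean(axis=0) for c in classes}

def predict(x):
    return min(classes, key=lambda c: np.linalg.norm(x - centroids[c]))

# ...evaluated on sound-source locations (cross-condition transfer).
pred = np.array([predict(x) for x in X_static])
accuracy = (pred == y_static).mean()
print(f"cross-condition accuracy: {accuracy:.2f} (chance = 0.25)")
```

Because the class signatures are shared across conditions in this toy model, transfer accuracy lands well above the 25% chance level, mirroring the interpretation of successful cross-condition decoding in the abstract.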