Duke Institute for Brain Sciences, Duke University, Durham, North Carolina.
Center for Cognitive Neuroscience, Duke University, Durham, North Carolina.
J Neurophysiol. 2020 Sep 1;124(3):715-727. doi: 10.1152/jn.00046.2020. Epub 2020 Jul 29.
The environment is sampled by multiple senses, which are woven together to produce a unified perceptual state. However, optimally unifying such signals requires assigning particular signals to the same or different underlying objects or events. Many prior studies (especially in animals) have assumed fusion of cross-modal information, whereas recent work in humans has begun to probe the appropriateness of this assumption. Here we present results from a novel behavioral task in which both monkeys and humans localized visual and auditory stimuli and reported their perceived sources through saccadic eye movements. When the locations of the visual and auditory stimuli were widely separated, subjects made two saccades; when the two stimuli were presented at the same location, they made only a single saccade. Intermediate levels of separation produced mixed response patterns: a single saccade to an intermediate position on some trials or separate saccades to both locations on others. The distribution of responses was well described by a hierarchical causal inference model that accurately predicted both the explicit "same vs. different" source judgments and the biases in localization of the source(s) under each of these conditions. The results from this task are broadly consistent with prior work in humans across a wide variety of analogous tasks, extending the study of multisensory causal inference to nonhuman primates and to a natural behavioral task that yields both a categorical assay of the number of perceived sources and a continuous report of the perceived position of the stimuli. We developed a novel behavioral paradigm for the study of multisensory causal inference in both humans and monkeys and found that both species make causal judgments in the same Bayes-optimal fashion.
To our knowledge, this is the first demonstration of behavioral causal inference in animals, and this cross-species comparison lays the groundwork for future experiments using neuronal recording techniques that are impractical or impossible in human subjects.
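To make the abstract's "hierarchical causal inference model" concrete, the following is a minimal sketch of one standard Bayesian formulation of the problem: given noisy visual and auditory position measurements, compute the posterior probability that both arose from a single common source. This is an illustrative implementation, not the paper's fitted model; all parameter values (`sigma_v`, `sigma_a`, `sigma_p`, `p_common`) are hypothetical placeholders, and Gaussian likelihoods with a zero-centered spatial prior are assumed throughout.

```python
import math

def common_cause_posterior(x_v, x_a, sigma_v=2.0, sigma_a=8.0,
                           sigma_p=15.0, p_common=0.5):
    """Posterior probability that visual measurement x_v and auditory
    measurement x_a (in degrees) arose from one common source.

    All noise/prior widths and the common-cause prior are illustrative,
    not values fitted in the study.
    """
    # Likelihood of (x_v, x_a) under a single source, with the source
    # location integrated out analytically (all distributions Gaussian,
    # prior over source position centered at 0 with width sigma_p).
    var1 = (sigma_v**2 * sigma_a**2
            + sigma_v**2 * sigma_p**2
            + sigma_a**2 * sigma_p**2)
    like_c1 = (math.exp(-0.5 * ((x_v - x_a)**2 * sigma_p**2
                                + x_v**2 * sigma_a**2
                                + x_a**2 * sigma_v**2) / var1)
               / (2 * math.pi * math.sqrt(var1)))

    # Likelihood under two independent sources, each drawn from the prior.
    var_v = sigma_v**2 + sigma_p**2
    var_a = sigma_a**2 + sigma_p**2
    like_c2 = (math.exp(-0.5 * x_v**2 / var_v) / math.sqrt(2 * math.pi * var_v)
               * math.exp(-0.5 * x_a**2 / var_a) / math.sqrt(2 * math.pi * var_a))

    # Bayes' rule over the binary causal variable.
    return (like_c1 * p_common
            / (like_c1 * p_common + like_c2 * (1 - p_common)))
```

Under this formulation, spatially coincident stimuli yield a high common-cause posterior (predicting a single saccade), widely separated stimuli yield a low one (predicting two saccades), and intermediate separations produce intermediate posteriors, consistent with the mixed response patterns described above.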