Brian Odegaard, David R. Wozny, Ladan Shams
Department of Psychology, University of California, Los Angeles, Los Angeles, California, United States of America.
Department of Bioengineering, University of California, Los Angeles, Los Angeles, California, United States of America.
PLoS Comput Biol. 2015 Dec 8;11(12):e1004649. doi: 10.1371/journal.pcbi.1004649. eCollection 2015 Dec.
Localization of objects and events in the environment is critical for survival, as many perceptual and motor tasks rely on estimation of spatial location. Therefore, it seems reasonable to assume that spatial localizations should generally be accurate. Curiously, some previous studies have reported biases in visual and auditory localizations, but these studies have used small sample sizes and the results have been mixed. Thus, it is not clear (1) whether the reported biases in localization responses are real (or due to outliers, sampling bias, or other factors), and (2) whether these putative biases reflect a bias in sensory representations of space or a priori expectations (which may arise from the experimental setup, instructions, or distribution of stimuli). Here, to address these questions, a dataset of unprecedented size (obtained from 384 observers) was analyzed to examine the presence, direction, and magnitude of sensory biases, and quantitative computational modeling was used to probe the underlying mechanism(s) driving these effects. The data revealed that, on average, observers were biased towards the center when localizing visual stimuli, and biased towards the periphery when localizing auditory stimuli. Moreover, quantitative analysis using a Bayesian Causal Inference framework suggests that while pre-existing spatial biases for central locations exert some influence, biases in the sensory representations of both visual and auditory space are necessary to fully explain the behavioral data. How are these opposing visual and auditory biases reconciled in conditions in which both auditory and visual stimuli are produced by a single event? Potentially, the bias in one modality could dominate, or the biases could interact or cancel each other out. The data revealed that when integration occurred in these conditions, the visual bias dominated, but the magnitude of this bias was reduced compared to unisensory conditions. Therefore, multisensory integration not only improves the precision of perceptual estimates, but also their accuracy.
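To make the modeling framework concrete, the sketch below implements the standard Gaussian formulation of Bayesian causal inference for audiovisual localization: noisy visual and auditory measurements are either attributed to a single shared source or to two independent sources, and the final location estimates blend the two causal structures by their posterior probability (model averaging). This is a minimal illustration, not the paper's fitted model: the noise standard deviations, the prior width and mean, the prior probability of a common cause, and the `gain_v`/`gain_a` terms used to mimic a compressive visual bias and an expansive auditory bias are all assumed, illustrative values.

```python
import numpy as np

def norm_pdf(x, mu, var):
    """Gaussian density with mean mu and variance var."""
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def causal_inference(x_v, x_a, sigma_v, sigma_a, sigma_p, mu_p=0.0, p_common=0.5):
    """Bayesian causal inference estimates of visual and auditory location.

    x_v, x_a       : noisy internal measurements (deg)
    sigma_v/a      : sensory noise SDs; sigma_p, mu_p define the Gaussian spatial prior
    p_common       : prior probability that both signals share one cause
    """
    var_v, var_a, var_p = sigma_v**2, sigma_a**2, sigma_p**2

    # Marginal likelihood of a single shared source (closed form for Gaussians).
    denom = var_v * var_a + var_v * var_p + var_a * var_p
    quad = ((x_v - x_a) ** 2 * var_p
            + (x_v - mu_p) ** 2 * var_a
            + (x_a - mu_p) ** 2 * var_v)
    like_c1 = np.exp(-0.5 * quad / denom) / (2.0 * np.pi * np.sqrt(denom))

    # Marginal likelihood of two independent sources.
    like_c2 = norm_pdf(x_v, mu_p, var_v + var_p) * norm_pdf(x_a, mu_p, var_a + var_p)

    # Posterior probability that the two signals came from one common cause.
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1.0 - p_common))

    # Reliability-weighted optimal estimates under each causal structure.
    s_fused = ((x_v / var_v + x_a / var_a + mu_p / var_p)
               / (1.0 / var_v + 1.0 / var_a + 1.0 / var_p))
    s_v_alone = (x_v / var_v + mu_p / var_p) / (1.0 / var_v + 1.0 / var_p)
    s_a_alone = (x_a / var_a + mu_p / var_p) / (1.0 / var_a + 1.0 / var_p)

    # Model averaging: blend the two structures by their posterior probability.
    s_v_hat = post_c1 * s_fused + (1.0 - post_c1) * s_v_alone
    s_a_hat = post_c1 * s_fused + (1.0 - post_c1) * s_a_alone
    return s_v_hat, s_a_hat, post_c1

# Illustrative simulation: a single audiovisual event at 10 deg eccentricity.
# gain_v < 1 compresses visual measurements toward the center; gain_a > 1
# expands auditory measurements toward the periphery. These hypothetical gains
# stand in for the sensory-representation biases the paper infers.
rng = np.random.default_rng(0)
s_true, gain_v, gain_a = 10.0, 0.9, 1.1
x_v = rng.normal(gain_v * s_true, 2.0, size=10_000)
x_a = rng.normal(gain_a * s_true, 8.0, size=10_000)
s_v_hat, s_a_hat, post_c1 = causal_inference(x_v, x_a, 2.0, 8.0, 15.0)
print(f"mean visual estimate:   {s_v_hat.mean():.2f} deg")
print(f"mean auditory estimate: {s_a_hat.mean():.2f} deg")
print(f"mean P(common cause):   {post_c1.mean():.2f}")
```

In this sketch, a central bias can enter through two distinct routes, mirroring the abstract's question: through the prior (mu_p centered at 0 pulls all estimates toward the middle) or through the sensory representation itself (the measurement gains). The model-averaging readout is one of several decision strategies compatible with the framework; model selection or probability matching would replace the final weighted blend.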