Zhang Huihui, Zhou Xiaolin
Center for Brain and Cognitive Sciences, Peking University, Beijing, China.
School of Psychological and Cognitive Sciences, Peking University, Beijing, China.
J Neurophysiol. 2017 Aug 1;118(2):1244-1256. doi: 10.1152/jn.01061.2015. Epub 2017 Jun 14.
Human timing behaviors are consistent with Bayesian inference, according to which both prior knowledge (the prior) and current sensory information determine the final response. However, it is unclear whether the brain represents temporal priors separately for individual modalities or in a supramodal manner when temporal information arrives from different modalities at different times. Here we asked participants to reproduce time intervals in either a unisensory or a multisensory context. In the unisensory tasks, sample intervals drawn from a uniform distribution were presented in a single modality, visual or auditory. In the multisensory tasks, sample intervals from the two modalities were randomly intermixed; visual and auditory intervals were drawn from two adjacent uniform distributions whose union equaled the distribution used in the unisensory tasks. In the unisensory tasks, participants' reproduced times exhibited the classic central-tendency bias: shorter intervals were overestimated and longer intervals were underestimated. In the multisensory tasks, reproduced times were biased toward the mean of the whole distribution rather than toward the means of the intervals in the individual modalities. A Bayesian model with a supramodal prior (the distribution of time intervals from both modalities) outperformed a model with modality-specific priors in describing participants' performance. With a generalized model that assumes a weighted combination of the unimodal priors, we further estimated, for each participant, the relative contributions of visual and auditory intervals to the formation of the prior. These findings suggest a supramodal mechanism for encoding priors in temporal processing, although the extent to which one modality influences the other differs across individuals.

NEW & NOTEWORTHY Visual timing and auditory timing influence each other when time intervals in the two modalities are drawn from two adjacent distributions and randomly intermixed. A Bayesian model with a supramodal prior (the distribution of intervals from both modalities) outperforms a model using sensory-specific priors in describing participants' performance. A generalized model further reveals that the prior is represented as a weighted average of the distributions of time intervals from the two modalities, with weights that differ across individuals.
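The following is a minimal Python sketch, not the authors' implementation, of the modeling ideas in the abstract: a Bayes least-squares (BLS) observer whose posterior-mean estimate under a uniform prior produces the central-tendency bias, and a generalized prior written as a weighted mixture of the visual and auditory interval distributions. The interval ranges (0.6-1.2 s), the scalar Weber-like measurement noise (sd = weber * t), and the weight w_vis are illustrative assumptions, not parameter values from the paper.

```python
import numpy as np

def bls_estimate(m, prior_lo, prior_hi, weber=0.1, n_grid=2000):
    """Posterior-mean (BLS) estimate of an interval from a noisy measurement m,
    assuming Gaussian measurement noise with sd = weber * t (scalar timing noise)
    and a uniform prior on [prior_lo, prior_hi]."""
    t = np.linspace(prior_lo, prior_hi, n_grid)           # support of the prior
    sd = weber * t
    likelihood = np.exp(-0.5 * ((m - t) / sd) ** 2) / sd  # p(m | t)
    posterior = likelihood / np.trapz(likelihood, t)      # flat prior cancels out
    return np.trapz(t * posterior, t)                     # posterior mean

def mixture_bls_estimate(m, w_vis, vis_range, aud_range, weber=0.1, n_grid=2000):
    """BLS estimate under a prior that is a weighted mixture of two uniform
    interval distributions (visual and auditory), as in the generalized model;
    w_vis is a hypothetical per-participant weight."""
    lo, hi = min(vis_range[0], aud_range[0]), max(vis_range[1], aud_range[1])
    t = np.linspace(lo, hi, n_grid)
    uni = lambda r: ((t >= r[0]) & (t <= r[1])) / (r[1] - r[0])  # uniform density
    prior = w_vis * uni(vis_range) + (1.0 - w_vis) * uni(aud_range)
    sd = weber * t
    post = prior * np.exp(-0.5 * ((m - t) / sd) ** 2) / sd
    post /= np.trapz(post, t)
    return np.trapz(t * post, t)

# Central tendency under a supramodal prior spanning the full range:
# short intervals are overestimated, long intervals underestimated.
rng = np.random.default_rng(0)
for t_true in (0.6, 0.9, 1.2):
    ms = rng.normal(t_true, 0.1 * t_true, 500)            # noisy measurements
    est = np.mean([bls_estimate(m, 0.6, 1.2) for m in ms])
    print(f"true {t_true:.2f} s -> mean reproduction {est:.3f} s")

# Example: a visual interval judged under a mixture prior over two adjacent
# uniform distributions (visual 0.6-0.9 s, auditory 0.9-1.2 s).
print(mixture_bls_estimate(0.7, w_vis=0.6, vis_range=(0.6, 0.9), aud_range=(0.9, 1.2)))
```

Under this sketch, w_vis = 1 reduces the mixture to a modality-specific prior, while w_vis = 0.5 over two adjacent, equal-width uniforms recovers the supramodal prior over the full range; fitting w_vis per participant would then quantify how strongly intervals from one modality shape the prior used for the other.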