Martin V. Butz, Anna Belardinelli, Stephan Ehrenfeld
Department of Computer Science, University of Tübingen, Tübingen, Germany.
Cogn Process. 2012 Aug;13 Suppl 1:S113-6. doi: 10.1007/s10339-012-0471-y.
The brain often integrates multisensory sources of information in a way that is close to optimal according to Bayesian principles. Since sensory modalities are grounded in different, body-relative frames of reference, multisensory integration requires accurate transformations of information. We have shown experimentally, for example, that a rotating tactile stimulus on the palm of the right hand can influence the judgment of ambiguously rotating visual displays. Most significantly, this influence depended on palm orientation: when the palm faced upwards, a clockwise rotation on the palm yielded a clockwise bias in the visual judgment; when it faced downwards, the same clockwise rotation yielded a counterclockwise bias. Thus, tactile rotation cues biased visual rotation judgments in a head-centered reference frame. Recently, we have developed a modular, multimodal arm model that can mimic aspects of such experiments. The model co-represents the state of the arm in several modalities, including a proprioceptive joint-angle modality as well as head-centered orientation and location modalities. Each modality represents each limb or joint separately. Sensory information is exchanged between modalities via local forward and inverse kinematic mappings. In addition, re-afferent sensory feedback is anticipated and integrated via Kalman filtering. Information across modalities is integrated probabilistically via Bayesian plausibility estimates, continuously maintaining a consistent global estimate of the arm state. This architecture is thus able to model the described effect of posture-dependent motion-cue integration: tactile and proprioceptive sensory information may exert top-down biases on visual processing. Likewise, such information may influence top-down visual attention by inducing expectations of particular arm-dependent motion patterns. Current research implements these effects on visual processing and attention.
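As a minimal sketch of the near-optimal integration principle mentioned above (not the authors' implementation), the snippet below fuses independent Gaussian cue estimates by precision weighting, the standard Bayes-optimal linear cue combination. The function name fuse_gaussian_cues and the example numbers are illustrative assumptions.

```python
import numpy as np

def fuse_gaussian_cues(means, variances):
    """Precision-weighted fusion of independent Gaussian cue estimates.

    Each cue contributes with weight 1/variance; the fused mean is the
    precision-weighted average and the fused variance is the inverse of
    the summed precisions.
    """
    precisions = 1.0 / np.asarray(variances, dtype=float)
    fused_var = 1.0 / precisions.sum()
    fused_mean = fused_var * (precisions * np.asarray(means, dtype=float)).sum()
    return fused_mean, fused_var

# Example: a reliable visual rotation estimate and a noisier tactile one.
mean, var = fuse_gaussian_cues(means=[10.0, 4.0], variances=[1.0, 4.0])
print(mean, var)  # the fused estimate lies closer to the more reliable cue
```

Under this scheme, the weight of each modality rises and falls with its reliability, which is why a noisy or ambiguous visual display becomes susceptible to a tactile bias.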
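The exchange of information between the proprioceptive and the head-centered modalities via local forward kinematic mappings can likewise be sketched for a planar kinematic chain. The function forward_kinematics and the link lengths below are hypothetical; the model described in the abstract additionally uses the corresponding local inverse mappings.

```python
import numpy as np

def forward_kinematics(joint_angles, link_lengths, base=(0.0, 0.0)):
    """Map joint angles (proprioceptive frame) to head-centered positions.

    Returns one 2-D endpoint position per limb segment, so that each limb
    can be represented separately in the location modality.
    """
    positions = []
    angle_sum = 0.0
    point = np.array(base, dtype=float)
    for theta, length in zip(joint_angles, link_lengths):
        angle_sum += theta  # orientation accumulates along the chain
        point = point + length * np.array([np.cos(angle_sum), np.sin(angle_sum)])
        positions.append(point.copy())
    return positions

# Shoulder at 45 deg, elbow flexed a further 30 deg, each segment 0.3 m long.
print(forward_kinematics([np.pi / 4, np.pi / 6], [0.3, 0.3]))
```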
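Finally, a generic Kalman predict/update cycle illustrates how re-afferent sensory feedback could be anticipated and then integrated; the state layout and noise matrices below are toy assumptions, not parameters of the authors' model.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle: anticipate the re-afferent observation
    from the current state estimate, then correct the estimate by the
    gain-weighted prediction error."""
    # Predict: propagate the state estimate and its uncertainty through F.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: compare the anticipated observation H @ x_pred with sensed z.
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy joint-angle state with velocity; constant-velocity dynamics, dt = 0.1 s.
F = np.array([[1.0, 0.1], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])                   # only the angle itself is sensed
Q = 1e-4 * np.eye(2)
R = np.array([[1e-2]])
x, P = np.zeros(2), np.eye(2)
x, P = kalman_step(x, P, z=np.array([0.05]), F=F, H=H, Q=Q, R=R)
print(x, P)
```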