SDU Biorobotics, Maersk Mc-Kinney Moller Institute, University of Southern Denmark, Odense, Denmark.
Front Neural Circuits. 2022 Aug 8;16:921453. doi: 10.3389/fncir.2022.921453. eCollection 2022.
The brain forms unified, coherent, and accurate percepts of events occurring in the environment by integrating information from multiple senses through the process of multisensory integration. The neural mechanisms underlying this process, and its development and maturation in a multisensory environment, are yet to be properly understood. Numerous psychophysical studies suggest that the multisensory cue integration process follows the principle of Bayesian estimation, where the contributions of individual sensory modalities are proportional to the relative reliabilities of the different sensory stimuli. In this article I hypothesize that experience-dependent crossmodal synaptic plasticity may be a plausible mechanism underlying the development of multisensory cue integration. I test this hypothesis with a computational model that implements Bayesian multisensory cue integration using reliability-based cue weighting. The model uses crossmodal synaptic plasticity to capture stimulus statistics within synaptic weights that are adapted to reflect the relative reliabilities of the participating stimuli. The model is embodied in a simulated robotic agent that learns to localize an audio-visual target by integrating spatial location cues extracted from the auditory and visual sensory modalities. Results of multiple randomized target localization trials in simulation indicate that the model is able to learn modality-specific synaptic weights proportional to the relative reliabilities of the auditory and visual stimuli. The proposed model with learned synaptic weights is also compared, via regression analysis, with a maximum-likelihood estimation model for cue integration. Results indicate that the proposed model is consistent with maximum-likelihood estimation.
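The reliability-based cue weighting that the abstract describes can be illustrated with a minimal sketch. This is not the paper's plasticity model; it is the standard maximum-likelihood combination rule against which the model is compared, with each cue weighted by its inverse variance. The target location and the variance values used here are illustrative assumptions.

```python
import random

def integrate(s_v, var_v, s_a, var_a):
    """Maximum-likelihood fusion of a visual and an auditory estimate.

    Each cue's weight is proportional to its reliability (inverse
    variance); the fused estimate has lower variance than either cue.
    """
    w_v = (1 / var_v) / (1 / var_v + 1 / var_a)
    w_a = 1.0 - w_v
    s_hat = w_v * s_v + w_a * s_a
    var_hat = 1.0 / (1 / var_v + 1 / var_a)
    return s_hat, var_hat

if __name__ == "__main__":
    random.seed(0)
    target = 10.0            # true azimuth in degrees (assumed value)
    var_v, var_a = 1.0, 4.0  # vision more reliable than audition here

    estimates = []
    for _ in range(10_000):
        # Noisy unisensory estimates of the same target location.
        s_v = random.gauss(target, var_v ** 0.5)
        s_a = random.gauss(target, var_a ** 0.5)
        s_hat, _ = integrate(s_v, var_v, s_a, var_a)
        estimates.append(s_hat)

    mean = sum(estimates) / len(estimates)
    emp_var = sum((e - mean) ** 2 for e in estimates) / len(estimates)
    # Empirical fused variance approaches 1/(1/1 + 1/4) = 0.8,
    # below the variance of either cue alone.
    print(round(mean, 2), round(emp_var, 2))
```

With `var_v = 1.0` and `var_a = 4.0`, the visual weight is 0.8 and the predicted fused variance is 0.8; in the paper's model, the analogous weights are instead learned through crossmodal synaptic plasticity.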