Cuppini Cristiano, Magosso Elisa, Ursino Mauro
Department of Electronics, Computer Science and Systems, University of Bologna, Bologna, Italy.
Front Psychol. 2011 May 2;2:77. doi: 10.3389/fpsyg.2011.00077. eCollection 2011.
In this paper, we present two neural network models, each devoted to a specific and widely investigated aspect of multisensory integration, to demonstrate the potential of computational models for gaining insight into the neural mechanisms underlying the organization, development, and plasticity of multisensory integration in the brain. The first model considers visual-auditory interaction in a midbrain structure named the superior colliculus (SC). The model reproduces and explains the main physiological features of multisensory integration in SC neurons and describes how the SC's integrative capability, which is not present at birth, develops gradually during postnatal life depending on sensory experience with cross-modal stimuli. The second model tackles the problem of how tactile stimuli on a body part and visual (or auditory) stimuli close to the same body part are integrated in multimodal parietal neurons to form the perception of peripersonal (i.e., near) space. The model investigates how the extent of peripersonal space, within which multimodal integration occurs, may be modified by experience, such as the use of a tool to interact with far space. The utility of the modeling approach rests on several points: (i) the two models, although devoted to different problems and simulating different brain regions, share common mechanisms (lateral inhibition and excitation, non-linear neuron characteristics, recurrent connections, competition, and Hebbian rules of potentiation and depression) that may more generally govern the fusion of the senses in the brain and the learning and plasticity of multisensory integration; (ii) the models may help interpret behavioral and psychophysical responses in terms of neural activity and synaptic connections; (iii) the models can make testable predictions that can guide future experiments aimed at validating, rejecting, or modifying the main assumptions.
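The shared mechanisms listed in point (i) can be illustrated with a minimal sketch. The code below is not the authors' actual model; it is a generic rate-network toy with a sigmoidal neuron characteristic, "Mexican hat" lateral connectivity (near excitation, far inhibition), recurrent settling to steady state, and a saturating Hebbian update of feedforward cross-modal synapses driven by repeated co-activation at one spatial position. All parameter values and the one-synapse-per-neuron feedforward layout are arbitrary assumptions chosen for clarity.

```python
import numpy as np

N = 40              # neurons in the layer
dt, tau = 0.1, 1.0  # integration step and membrane time constant

def sigmoid(u, slope=4.0, theta=0.5):
    """Non-linear (sigmoidal) neuron characteristic."""
    return 1.0 / (1.0 + np.exp(-slope * (u - theta)))

def mexican_hat(n, ex_sigma=2.0, in_sigma=6.0, ex_gain=1.2, in_gain=0.9):
    """Lateral weights: short-range excitation, long-range inhibition."""
    d = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    L = (ex_gain * np.exp(-d**2 / (2 * ex_sigma**2))
         - in_gain * np.exp(-d**2 / (2 * in_sigma**2)))
    np.fill_diagonal(L, 0.0)   # no self-connection
    return L

L = mexican_hat(N)
W = np.full(N, 0.1)            # weak initial cross-modal synapses (one per neuron)

def settle(ext, steps=200):
    """Relax the recurrent network to steady state for a fixed external input."""
    u = np.zeros(N)
    for _ in range(steps):
        z = sigmoid(u)
        u += dt / tau * (-u + ext + L @ z)
    return sigmoid(u)

def hebbian_step(W, pre, post, rate=0.05, w_max=1.0):
    """Hebbian potentiation with a saturation factor that bounds the weights."""
    return W + rate * post * pre * (w_max - W)

# Training with cross-modal experience: repeated co-activation at position 20.
pos = 20
pre = np.exp(-(np.arange(N) - pos)**2 / (2 * 2.0**2))  # spatial input profile
for _ in range(20):
    post = settle(pre + W * pre)    # response to combined unimodal + cross-modal drive
    W = hebbian_step(W, pre, post)

# Synapses near the trained position strengthen; distant ones stay weak.
print(W[pos] > W[5])  # → True
```

The sketch shows how the abstract's ingredients fit together: the recurrent loop with lateral inhibition yields a localized, competitive activity bubble, and the Hebbian rule then reinforces only the cross-modal synapses whose pre- and post-synaptic activities co-occur, which is the kind of experience-dependent development both models invoke.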