Mikula Laura, Gaveau Valérie, Pisella Laure, Khan Aarlenne Z, Blohm Gunnar
Centre de Recherche en Neurosciences de Lyon, ImpAct Team, INSERM U1028, CNRS UMR 5292, Lyon 1 University, Bron Cedex, France.
School of Optometry, University of Montreal, Montreal, Quebec, Canada.
J Neurophysiol. 2018 May 1;119(5):1981-1992. doi: 10.1152/jn.00338.2017. Epub 2018 Feb 21.
When reaching to an object, information about the target location as well as the initial hand position is required to program the motor plan for the arm. The initial hand position can be determined by proprioceptive information as well as visual information, if available. Bayes-optimal integration posits that we utilize all information available, with greater weighting on the sense that is more reliable, thus generally weighting visual information more than the usually less reliable proprioceptive information. The criterion by which information is weighted has not been explicitly investigated; it has been assumed that the weights are based on task- and effector-dependent sensory reliability, requiring an explicit neuronal representation of variability. However, the weights could also be determined implicitly through learned modality-specific integration weights rather than effector-dependent reliability. While the former hypothesis predicts different proprioceptive weights for the left and right hands, e.g., due to different reliabilities of dominant vs. nondominant hand proprioception, we would expect the same integration weights if the latter hypothesis were true. We found that the proprioceptive weights for the left and right hands were extremely consistent, regardless of differences in sensory variability between the two hands as measured in two separate complementary tasks. Thus we propose that proprioceptive weights during reaching are learned across both hands, with a high interindividual range but independent of each hand's specific proprioceptive variability.

NEW & NOTEWORTHY How visual and proprioceptive information about the hand are integrated to plan a reaching movement is still debated. The goal of this study was to clarify how the weights assigned to vision and proprioception during multisensory integration are determined. We found evidence that the integration weights are modality specific rather than based on the sensory reliabilities of the effectors.
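The Bayes-optimal integration scheme the abstract describes weights each cue by its reliability (inverse variance). A minimal sketch of that computation, assuming Gaussian noise on each cue; the variance values below are illustrative placeholders, not data from this study:

```python
# Reliability-weighted (Bayes-optimal) cue integration for a 1-D hand
# position estimate. Each cue's weight is its precision (1/variance)
# normalized by the total precision; the fused estimate has lower
# variance than either cue alone.

def integration_weights(var_vision, var_proprio):
    """Return (w_vision, w_proprio), each proportional to inverse variance."""
    precision_v = 1.0 / var_vision
    precision_p = 1.0 / var_proprio
    total = precision_v + precision_p
    return precision_v / total, precision_p / total

def fused_estimate(x_vision, var_vision, x_proprio, var_proprio):
    """Combine two noisy position estimates into the minimum-variance estimate."""
    w_v, w_p = integration_weights(var_vision, var_proprio)
    x_hat = w_v * x_vision + w_p * x_proprio
    var_hat = 1.0 / (1.0 / var_vision + 1.0 / var_proprio)  # fused variance
    return x_hat, var_hat

# Example: vision is more reliable (smaller variance) than proprioception,
# so the fused hand-position estimate is pulled toward the visual cue.
x_hat, var_hat = fused_estimate(x_vision=0.0, var_vision=1.0,
                                x_proprio=2.0, var_proprio=4.0)
print(x_hat, var_hat)  # 0.4 0.8
```

Under the paper's alternative hypothesis, the weights `w_vision` and `w_proprio` would be learned modality-specific constants shared across both hands, rather than being recomputed from each effector's current variance as in this sketch.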