
Inferring visuomotor priors for sensorimotor learning.

Affiliations

Computational and Biological Learning Laboratory, Department of Engineering, University of Cambridge, Cambridge, United Kingdom.

Publication information

PLoS Comput Biol. 2011 Mar;7(3):e1001112. doi: 10.1371/journal.pcbi.1001112. Epub 2011 Mar 31.

Abstract

Sensorimotor learning has been shown to depend on both prior expectations and sensory evidence in a way that is consistent with Bayesian integration. Thus, prior beliefs play a key role during the learning process, especially when only ambiguous sensory information is available. Here we develop a novel technique to estimate the covariance structure of the prior over visuomotor transformations (the mapping between the actual and visual location of the hand) during a learning task. Subjects performed reaching movements under multiple visuomotor transformations in which they received visual feedback of their hand position only at the end of the movement. After experiencing a particular transformation for one reach, subjects have insufficient information to determine the exact transformation, and so their second reach reflects a combination of their prior over visuomotor transformations and the sensory evidence from the first reach. We developed a Bayesian observer model in order to infer the covariance structure of the subjects' prior, which was found to give high probability to parameter settings consistent with visuomotor rotations. Therefore, although the set of visuomotor transformations experienced had little structure, the subjects had a strong tendency to interpret ambiguous sensory evidence as arising from rotation-like transformations. We then exposed the same subjects to a highly structured set of visuomotor transformations, designed to be very different from the set of visuomotor rotations. During this exposure the prior was found to have changed significantly, to a covariance structure that no longer favored rotation-like transformations. In summary, we have developed a technique which can estimate the full covariance structure of a prior in a sensorimotor task and have shown that the prior over visuomotor transformations favors a rotation-like structure. Moreover, through experience of a novel task structure, participants can appropriately alter the covariance structure of their prior.
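The core Bayesian integration described above, in which the second reach combines the prior with ambiguous evidence from the first reach, can be sketched in its simplest one-dimensional form. This is an illustrative conjugate-Gaussian update, not the paper's actual observer model; all variable names and numerical values (prior mean/variance, observed error, sensory noise) are hypothetical.

```python
# Sketch: Bayesian combination of a Gaussian prior over a visuomotor
# transformation parameter (e.g. a rotation angle) with noisy sensory
# evidence from a single reach. Values are illustrative only.

prior_mean = 0.0   # prior belief about the rotation angle (radians)
prior_var = 0.5    # prior variance (broad: transformation is uncertain)

obs = 0.3          # visual error observed at the end of the first reach
obs_var = 0.2      # sensory noise variance of the endpoint feedback

# Conjugate Gaussian update: posterior precision is the sum of the
# precisions, and the posterior mean is the precision-weighted average
# of the prior mean and the observation.
post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
post_mean = post_var * (prior_mean / prior_var + obs / obs_var)

print(post_mean)  # lies between prior_mean and obs,
print(post_var)   # closer to the less noisy source
```

The posterior mean falls between the prior and the evidence, weighted by their reliabilities, which is why an ambiguous first reach pulls the second reach toward transformations the prior favors. The paper's contribution extends this idea to recover the full covariance structure of the prior over multidimensional visuomotor transformations.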


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c959/3068921/ca6d77f321fa/pcbi.1001112.g001.jpg
