
Active learning of novel sound-producing objects: motor reactivation and enhancement of visuo-motor connectivity.

Affiliations

Indiana University, IN, USA.

Publication information

J Cogn Neurosci. 2013 Feb;25(2):203-18. doi: 10.1162/jocn_a_00284. Epub 2012 Aug 20.

Abstract

Our experience with the world commonly involves physical interaction with objects enabling us to learn associations between multisensory information perceived during an event and our actions that create an event. The interplay among active interactions during learning and multisensory integration of object properties is not well understood. To better understand how action might enhance multisensory associative recognition, we investigated the interplay among motor and perceptual systems after active learning. Fifteen participants were included in an fMRI study during which they learned visuo-auditory-motor associations between novel objects and the sounds they produce, either through self-generated actions on the objects (active learning) or by observing an experimenter produce the actions (passive learning). Immediately after learning, behavioral and BOLD fMRI measures were collected while perceiving the objects used during unisensory and multisensory training in associative perception and recognition tasks. Active learning was faster and led to more accurate recognition of audiovisual associations than passive learning. Functional ROI analyses showed that in motor, somatosensory, and cerebellar regions there was greater activation during both the perception and recognition of actively learned associations. Finally, functional connectivity between visual- and motor-related processing regions was enhanced during the presentation of actively learned audiovisual associations. Overall, the results of the current study clarify and extend our own previous work [Butler, A. J., James, T. W., & Harman James, K. Enhanced multisensory integration and motor reactivation after active motor learning of audiovisual associations. Journal of Cognitive Neuroscience, 23, 3515-3528, 2011] by providing several novel findings and highlighting the task-based nature of motor reactivation and retrieval after active learning.

