

Visual delay affects force scaling and weight perception during object lifting in virtual reality.

Affiliations

Department of Movement Sciences and Leuven Brain Institute, KU Leuven, Leuven, Belgium.

Sobell Department of Motor Neuroscience and Movement Disorders, Institute of Neurology, University College London, London, United Kingdom.

Publication information

J Neurophysiol. 2019 Apr 1;121(4):1398-1409. doi: 10.1152/jn.00396.2018. Epub 2019 Jan 23.

Abstract

Lifting an object requires precise scaling of fingertip forces based on a prediction of object weight. At object contact, a series of tactile and visual events arise that need to be rapidly processed online to fine-tune the planned motor commands for lifting the object. The brain mechanisms underlying multisensory integration serially at transient sensorimotor events, a general feature of actions requiring hand-object interactions, are not yet understood. In this study we tested the relative weighting between haptic and visual signals when they are integrated online into the motor command. We used a new virtual reality setup to desynchronize visual feedback from haptics, which allowed us to probe the relative contribution of haptics and vision in driving participants' movements when they grasped virtual objects simulated by two force-feedback robots. We found that visual delay changed the profile of fingertip force generation and led participants to perceive objects as heavier than when lifts were performed without visual delay. We further modeled the effect of vision on motor output by manipulating the extent to which delayed visual events could bias the force profile, which allowed us to determine the specific weighting the brain assigns to haptics and vision. Our results show for the first time how visuo-haptic integration is processed at discrete sensorimotor events for controlling object-lifting dynamics and further highlight the organization of multisensory signals online for controlling action and perception.

NEW & NOTEWORTHY

Dexterous hand movements require rapid integration of information from different senses, in particular touch and vision, at different key time points as movement unfolds. The relative weighting between vision and haptics for object manipulation is unknown. We used object lifting in virtual reality to desynchronize visual and haptic feedback and find out their relative weightings. Our findings shed light on how rapid multisensory integration is processed over a series of discrete sensorimotor control points.
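The abstract does not give the paper's model equations, but the relative weighting it describes is commonly formalized as reliability-weighted cue combination. A minimal sketch of that idea, assuming a standard inverse-variance (maximum-likelihood) weighting rule and illustrative timing values not taken from the paper:

```python
# Hedged illustration (not the paper's actual model): each cue is weighted
# by its inverse variance, the standard maximum-likelihood combination rule.
def integrate_cues(haptic_est: float, visual_est: float,
                   haptic_var: float, visual_var: float) -> float:
    """Fuse two estimates, weighting each by its reliability (1/variance)."""
    w_haptic = (1.0 / haptic_var) / (1.0 / haptic_var + 1.0 / visual_var)
    w_visual = 1.0 - w_haptic
    return w_haptic * haptic_est + w_visual * visual_est

# A visual delay shifts the visually signaled lift-off time; the fused
# estimate of the event then lags the true (haptic) event, which is one way
# a delayed visual cue could bias force scaling and perceived heaviness.
t_haptic = 0.30            # s: lift-off signaled by touch (assumed value)
delay = 0.10               # s: imposed visual delay (assumed value)
t_visual = t_haptic + delay
fused_t = integrate_cues(t_haptic, t_visual, haptic_var=0.01, visual_var=0.01)
print(fused_t)             # 0.35 with equal reliabilities
```

With equal reliabilities the fused event time sits midway between the haptic and delayed visual events; making the visual cue noisier shifts the weighting toward haptics, which is the kind of weighting the study set out to measure.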


Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/eba3/6485735/42fc27b6c7c4/z9k0031949760001.jpg
