
Object weight can be rapidly predicted, with low cognitive load, by exploiting learned associations between the weights and locations of objects.

Affiliations

Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York.

Department of Neuroscience, Columbia University, New York, New York.

Publication information

J Neurophysiol. 2023 Feb 1;129(2):285-297. doi: 10.1152/jn.00414.2022. Epub 2022 Nov 9.

Abstract

Weight prediction is critical for dexterous object manipulation. Previous work has focused on lifting objects presented in isolation and has examined how the visual appearance of an object is used to predict its weight. Here we tested the novel hypothesis that when interacting with multiple objects, as is common in everyday tasks, people exploit the locations of objects to directly predict their weights, bypassing slower and more demanding processing of visual properties to predict weight. Using a three-dimensional robotic and virtual reality system, we developed a task in which participants were presented with a set of objects. In each trial a randomly chosen object translated onto the participant's hand and they had to anticipate the object's weight by generating an equivalent upward force. Across conditions we could control whether the visual appearance and/or location of the objects were informative as to their weight. Using this task, and a set of analogous web-based experiments, we show that when location information was predictive of the objects' weights participants used this information to achieve faster prediction than observed when prediction is based on visual appearance. We suggest that by "caching" associations between locations and weights, the sensorimotor system can speed prediction while also lowering working memory demands involved in predicting weight from object visual properties.

We use a novel object support task using a three-dimensional robotic interface and virtual reality system to provide evidence that the locations of objects are used to predict their weights. Using location information, rather than the visual appearance of the objects, supports fast prediction, thereby avoiding processes that can be demanding on working memory.


Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9bac/9886355/fd4e10ab94a2/jn-00414-2022r01.jpg

Similar articles

Motor memories of object dynamics are categorically organized. Elife. 2021 Nov 19;10:e71627. doi: 10.7554/eLife.71627.

Representing multiple object weights: competing priors and sensorimotor memories. J Neurophysiol. 2016 Oct 1;116(4):1615-1625. doi: 10.1152/jn.00282.2016. Epub 2016 Jul 6.

Sensorimotor memory for object weight is based on previous experience during lifting, not holding. Neuropsychologia. 2019 Aug;131:306-315. doi: 10.1016/j.neuropsychologia.2019.05.025. Epub 2019 May 29.

Material evidence: interaction of well-learned priors and sensorimotor memory when lifting objects. J Neurophysiol. 2012 Sep;108(5):1262-9. doi: 10.1152/jn.00263.2012. Epub 2012 Jun 13.

Visual delay affects force scaling and weight perception during object lifting in virtual reality. J Neurophysiol. 2019 Apr 1;121(4):1398-1409. doi: 10.1152/jn.00396.2018. Epub 2019 Jan 23.

Sensorimotor prediction and memory in object manipulation. Can J Exp Psychol. 2001 Jun;55(2):87-95. doi: 10.1037/h0087355.

Lift observation conveys object weight distribution but partly enhances predictive lift planning. J Neurophysiol. 2021 Apr 1;125(4):1348-1366. doi: 10.1152/jn.00374.2020. Epub 2021 Jan 20.

Object properties and cognitive load in the formation of associative memory during precision lifting. Behav Brain Res. 2009 Jan 3;196(1):123-30. doi: 10.1016/j.bbr.2008.07.031. Epub 2008 Aug 5.

Cited by

Action Intentions Reactivate Representations of Task-Relevant Cognitive Cues. eNeuro. 2025 Jun 23;12(6). doi: 10.1523/ENEURO.0041-25.2025. Print 2025 Jun.

Ouvrai opens access to remote virtual reality studies of human behavioural neuroscience. Nat Hum Behav. 2024 Jun;8(6):1209-1224. doi: 10.1038/s41562-024-01834-7. Epub 2024 Apr 26.

References cited in this article

Motor memories of object dynamics are categorically organized. Elife. 2021 Nov 19;10:e71627. doi: 10.7554/eLife.71627.

Dissociable cognitive strategies for sensorimotor learning. Nat Commun. 2019 Jan 3;10(1):40. doi: 10.1038/s41467-018-07941-0.

Neural bases of automaticity. J Exp Psychol Learn Mem Cogn. 2018 Mar;44(3):440-464. doi: 10.1037/xlm0000454. Epub 2017 Sep 21.

Linking actions and objects: Context-specific learning of novel weight priors. Cognition. 2017 Jun;163:121-127. doi: 10.1016/j.cognition.2017.02.014. Epub 2017 Mar 17.

Representing multiple object weights: competing priors and sensorimotor memories. J Neurophysiol. 2016 Oct 1;116(4):1615-1625. doi: 10.1152/jn.00282.2016. Epub 2016 Jul 6.

Integrating actions into object location memory: a benefit for active versus passive reaching movements. Behav Brain Res. 2015 Feb 15;279:234-9. doi: 10.1016/j.bbr.2014.11.043. Epub 2014 Dec 2.

Can we reconcile the declarative memory and spatial navigation views on hippocampal function? Neuron. 2014 Aug 20;83(4):764-70. doi: 10.1016/j.neuron.2014.07.032.

Representation of object weight in human ventral visual cortex. Curr Biol. 2014 Aug 18;24(16):1866-73. doi: 10.1016/j.cub.2014.06.046. Epub 2014 Jul 24.

Two routes to expertise in mental rotation. Cogn Sci. 2013 Sep-Oct;37(7):1321-42. doi: 10.1111/cogs.12042. Epub 2013 May 15.
