Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York.
Department of Neuroscience, Columbia University, New York, New York.
J Neurophysiol. 2023 Feb 1;129(2):285-297. doi: 10.1152/jn.00414.2022. Epub 2022 Nov 9.
Weight prediction is critical for dexterous object manipulation. Previous work has focused on lifting objects presented in isolation and has examined how the visual appearance of an object is used to predict its weight. Here we tested the novel hypothesis that when interacting with multiple objects, as is common in everyday tasks, people exploit the locations of objects to directly predict their weights, bypassing the slower and more demanding processing of visual properties. Using a three-dimensional robotic and virtual reality system, we developed a task in which participants were presented with a set of objects. In each trial, a randomly chosen object translated onto the participant's hand, and the participant had to anticipate the object's weight by generating an equivalent upward force. Across conditions we controlled whether the visual appearance and/or location of the objects was informative as to their weight. Using this task, and a set of analogous web-based experiments, we show that when location information was predictive of the objects' weights, participants used this information to achieve faster prediction than when prediction was based on visual appearance. We suggest that by "caching" associations between locations and weights, the sensorimotor system can speed prediction while also lowering the working memory demands involved in predicting weight from object visual properties.

We use a novel object support task, based on a three-dimensional robotic interface and virtual reality system, to provide evidence that the locations of objects are used to predict their weights. Using location information, rather than the visual appearance of the objects, supports fast prediction, thereby avoiding processes that can be demanding on working memory.