
Flexible constraint hierarchy during the visual encoding of tool-object interactions.

Affiliations

School of Biological Sciences, Georgia Institute of Technology, Atlanta, Georgia, USA.

Weill Institute of Neurosciences, University of California, San Francisco, California, USA.

Publication information

Eur J Neurosci. 2021 Oct;54(7):6520-6532. doi: 10.1111/ejn.15460. Epub 2021 Sep 27.

Abstract

Tools and objects are associated with numerous action possibilities that are reduced depending on the task-related internal and external constraints presented to the observer. Action hierarchies propose that goals represent higher levels of the hierarchy while kinematic patterns represent lower levels of the hierarchy. Prior work suggests that tool-object perception is heavily influenced by grasp and action context. The current study sought to evaluate whether the presence of action hierarchy can be perceptually identified using eye tracking during tool-object observation. We hypothesized that gaze patterns would reveal a perceptual hierarchy based on the observed task context and grasp constraints. Participants viewed tool-object scenes with two types of constraints: task-context and grasp constraints. Task-context constraints consisted of correct (e.g., frying pan-spatula) and incorrect tool-object pairings (e.g., stapler-spatula). Grasp constraints involved modified tool orientations, which required participants to understand how initially awkward grasp postures can help achieve the task. The visual scene contained three areas of interest (AOIs): the object, the functional tool-end (e.g., spoon handle) and the manipulative tool-end (e.g., spoon bowl). Results revealed two distinct processes based on stimuli constraints. Goal-oriented encoding, the attentional bias towards the object and manipulative tool-end, was demonstrated when grasp did not lead to meaningful tool-use. In images where grasp postures were critical to action performance, attentional bias was primarily between the object and functional tool-end, which suggests means-related encoding of the graspable properties of the object. This study expands on previous work and demonstrates a flexible constraint hierarchy depending on the observed task constraints.
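The attentional-bias measure described in the abstract rests on assigning fixations to AOIs and comparing dwell time across them. As a minimal illustrative sketch (not the authors' analysis pipeline — the rectangular AOI layout, the data format, and all names here are assumptions), per-AOI dwell proportions can be computed like this:

```python
# Hypothetical sketch of AOI dwell-time analysis for one trial.
# AOI names follow the abstract (object, functional tool-end,
# manipulative tool-end); coordinates and data layout are invented.
from collections import Counter

def point_in_rect(x, y, rect):
    """rect = (left, top, right, bottom) in screen pixels."""
    left, top, right, bottom = rect
    return left <= x <= right and top <= y <= bottom

def aoi_dwell_proportions(fixations, aois):
    """fixations: list of (x, y, duration_ms) tuples.
    aois: dict mapping AOI name -> bounding rect.
    Returns the fraction of total fixation time spent in each AOI."""
    dwell = Counter()
    total = 0.0
    for x, y, dur in fixations:
        total += dur
        for name, rect in aois.items():
            if point_in_rect(x, y, rect):
                dwell[name] += dur
                break  # assume AOIs do not overlap
    if total == 0:
        return {name: 0.0 for name in aois}
    return {name: dwell[name] / total for name in aois}

# Example trial with three non-overlapping AOIs (coordinates invented).
aois = {
    "object": (0, 0, 300, 400),
    "functional_end": (400, 0, 600, 200),
    "manipulative_end": (400, 201, 600, 400),
}
fixations = [(100, 100, 200), (500, 100, 300), (500, 300, 500)]
props = aoi_dwell_proportions(fixations, aois)
```

A bias toward the object and manipulative tool-end in such proportions would correspond to the goal-oriented encoding the study reports, while a bias toward the object and functional tool-end would correspond to means-related encoding.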

