
Using enriched semantic event chains to model human action prediction based on (minimal) spatial information.

Affiliations

Institute for Physics 3 - Biophysics and Bernstein Center for Computational Neuroscience (BCCN), University of Göttingen, Göttingen, Germany.

Department of Psychology, University of Münster, Münster, Germany.

Publication information

PLoS One. 2020 Dec 28;15(12):e0243829. doi: 10.1371/journal.pone.0243829. eCollection 2020.

DOI: 10.1371/journal.pone.0243829
PMID: 33370343
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7769489/
Abstract

Predicting other people's upcoming action is key to successful social interactions. Previous studies have started to disentangle the various sources of information that action observers exploit, including objects, movements, contextual cues and features regarding the acting person's identity. We here focus on the role of static and dynamic inter-object spatial relations that change during an action. We designed a virtual reality setup and tested recognition speed for ten different manipulation actions. Importantly, all objects had been abstracted by emulating them with cubes such that participants could not infer an action using object information. Instead, participants had to rely only on the limited information that comes from the changes in the spatial relations between the cubes. In spite of these constraints, participants were able to predict actions in, on average, less than 64% of the action's duration. Furthermore, we employed a computational model, the so-called enriched Semantic Event Chain (eSEC), which incorporates the information of different types of spatial relations: (a) objects' touching/untouching, (b) static spatial relations between objects and (c) dynamic spatial relations between objects during an action. Assuming the eSEC as an underlying model, we show, using information theoretical analysis, that humans mostly rely on a mixed-cue strategy when predicting actions. Machine-based action prediction is able to produce faster decisions based on individual cues. We argue that human strategy, though slower, may be particularly beneficial for prediction of natural and more complex actions with more variable or partial sources of information. Our findings contribute to the understanding of how individuals afford inferring observed actions' goals even before full goal accomplishment, and may open new avenues for building robots for conflict-free human-robot cooperation.
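The abstract's core representational idea — an action encoded as a chain of touching/untouching relation states between object pairs, with recognition possible as soon as the observed prefix is unambiguous — can be illustrated with a minimal sketch. This is not the authors' eSEC implementation; the action chains, object pairs, and relation labels below are hypothetical, and real eSECs additionally encode static and dynamic spatial relations.

```python
# Minimal sketch (assumed, simplified) of the event-chain prediction idea:
# an action is a sequence of touching ("T") / not-touching ("N") states for
# a fixed set of object pairs, here (hand, object), (object, support),
# (object, target). All chains below are hypothetical examples.
KNOWN_CHAINS = {
    "pick_and_place": [("N", "T", "N"), ("T", "T", "N"), ("T", "N", "N"),
                       ("T", "N", "T"), ("N", "N", "T")],
    "push":           [("N", "T", "N"), ("T", "T", "N"), ("T", "T", "N"),
                       ("N", "T", "N")],
    "take_down":      [("N", "T", "N"), ("T", "T", "N"), ("T", "N", "N"),
                       ("N", "N", "N")],
}

def predict(observed):
    """Return the set of known actions consistent with the observed prefix."""
    return {name for name, chain in KNOWN_CHAINS.items()
            if chain[:len(observed)] == observed}

def earliest_decision(action):
    """Fraction of the chain that must be observed before it is unambiguous."""
    chain = KNOWN_CHAINS[action]
    for k in range(1, len(chain) + 1):
        if predict(chain[:k]) == {action}:
            return k / len(chain)
    return 1.0
```

Under these toy chains, "push" becomes unambiguous after 75% of its events, mirroring the paper's finding that actions can often be identified well before completion.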


Figures (g001–g011):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9d63/7769489/921775602f65/pone.0243829.g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9d63/7769489/ac271304d168/pone.0243829.g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9d63/7769489/8b47b68f2032/pone.0243829.g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9d63/7769489/fc8c590b5a21/pone.0243829.g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9d63/7769489/32531f4127be/pone.0243829.g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9d63/7769489/1242d979bd09/pone.0243829.g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9d63/7769489/bdd231f7e086/pone.0243829.g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9d63/7769489/894c06806681/pone.0243829.g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9d63/7769489/87d014d2e052/pone.0243829.g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9d63/7769489/c966cbf2a641/pone.0243829.g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9d63/7769489/f88df0a37e4f/pone.0243829.g011.jpg

Similar articles

1. Using enriched semantic event chains to model human action prediction based on (minimal) spatial information.
PLoS One. 2020 Dec 28;15(12):e0243829. doi: 10.1371/journal.pone.0243829. eCollection 2020.
2. Making sense of objects lying around: How contextual objects shape brain activity during action observation.
Neuroimage. 2018 Feb 15;167:429-437. doi: 10.1016/j.neuroimage.2017.11.047. Epub 2017 Nov 22.
3. Humans Predict Action using Grammar-like Structures.
Sci Rep. 2020 Mar 4;10(1):3999. doi: 10.1038/s41598-020-60923-5.
4. Integrated contextual representation for objects' identities and their locations.
J Cogn Neurosci. 2008 Mar;20(3):371-88. doi: 10.1162/jocn.2008.20027.
5. Observing human-object interactions: using spatial and functional compatibility for recognition.
IEEE Trans Pattern Anal Mach Intell. 2009 Oct;31(10):1775-89. doi: 10.1109/TPAMI.2009.83.
6. Facilitation of allocentric coding by virtue of object-semantics.
Sci Rep. 2019 Apr 18;9(1):6263. doi: 10.1038/s41598-019-42735-4.
7. Scene semantics affects allocentric spatial coding for action in naturalistic (virtual) environments.
Sci Rep. 2024 Jul 5;14(1):15549. doi: 10.1038/s41598-024-66428-9.
8. The roles of scene gist and spatial dependency among objects in the semantic guidance of attention in real-world scenes.
Vision Res. 2014 Dec;105:10-20. doi: 10.1016/j.visres.2014.08.019. Epub 2014 Sep 6.
9. Selecting object pairs for action: Is the active object always first?
Exp Brain Res. 2015 Aug;233(8):2269-81. doi: 10.1007/s00221-015-4296-7. Epub 2015 May 1.
10. Conflict between object structural and functional affordances in peripersonal space.
Cognition. 2016 Oct;155:1-7. doi: 10.1016/j.cognition.2016.06.006. Epub 2016 Jun 18.

Cited by

1. People can reliably detect action changes and goal changes during naturalistic perception.
Mem Cognit. 2024 Jul;52(5):1093-1111. doi: 10.3758/s13421-024-01525-8. Epub 2024 Feb 5.
2. The Social Robot and the Digital Physiotherapist: Are We Ready for the Team Play?
Healthcare (Basel). 2021 Oct 27;9(11):1454. doi: 10.3390/healthcare9111454.
3. The Social Robot in Rehabilitation and Assistance: What Is the Future?
Healthcare (Basel). 2021 Feb 25;9(3):244. doi: 10.3390/healthcare9030244.

References

1. Humans Predict Action using Grammar-like Structures.
Sci Rep. 2020 Mar 4;10(1):3999. doi: 10.1038/s41598-020-60923-5.
2. Predictive Impact of Contextual Objects during Action Observation: Evidence from Functional Magnetic Resonance Imaging.
J Cogn Neurosci. 2020 Feb;32(2):326-337. doi: 10.1162/jocn_a_01480. Epub 2019 Oct 16.
3. Neural correlates of action: Comparing meta-analyses of imagery, observation, and execution.
Neurosci Biobehav Rev. 2018 Nov;94:31-44. doi: 10.1016/j.neubiorev.2018.08.003. Epub 2018 Aug 9.
4. Making sense of objects lying around: How contextual objects shape brain activity during action observation.
Neuroimage. 2018 Feb 15;167:429-437. doi: 10.1016/j.neuroimage.2017.11.047. Epub 2017 Nov 22.
5. A fast, invariant representation for human action in the visual system.
J Neurophysiol. 2018 Feb 1;119(2):631-640. doi: 10.1152/jn.00642.2017. Epub 2017 Nov 8.
6. Action at its place: Contextual settings enhance action recognition in 4- to 8-year-old children.
Dev Psychol. 2017 Apr;53(4):662-670. doi: 10.1037/dev0000273. Epub 2017 Feb 9.
7. Understanding the Goals of Everyday Instrumental Actions Is Primarily Linked to Object, Not Motor-Kinematic, Information: Evidence from fMRI.
PLoS One. 2017 Jan 12;12(1):e0169700. doi: 10.1371/journal.pone.0169700. eCollection 2017.
8. Neural and Computational Mechanisms of Action Processing: Interaction between Visual and Motor Representations.
Neuron. 2015 Oct 7;88(1):167-80. doi: 10.1016/j.neuron.2015.09.040.
9. Objects Mediate Goal Integration in Ventrolateral Prefrontal Cortex during Action Observation.
PLoS One. 2015 Jul 28;10(7):e0134316. doi: 10.1371/journal.pone.0134316. eCollection 2015.
10. Predicting goals in action episodes attenuates BOLD response in inferior frontal and occipitotemporal cortex.
Behav Brain Res. 2014 Nov 1;274:108-17. doi: 10.1016/j.bbr.2014.07.053. Epub 2014 Aug 6.