

Gaze-action coupling, gaze-gesture coupling, and exogenous attraction of gaze in dyadic interactions.

Author information

Hessels Roy S, Li Peitong, Balali Sofia, Teunisse Martin K, Poppe Ronald, Niehorster Diederick C, Nyström Marcus, Benjamins Jeroen S, Senju Atsushi, Salah Albert A, Hooge Ignace T C

Affiliations

Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584CS, Utrecht, Netherlands.

Information and Computing Sciences, Utrecht University, Utrecht, Netherlands.

Publication information

Atten Percept Psychophys. 2024 Nov;86(8):2761-2777. doi: 10.3758/s13414-024-02978-4. Epub 2024 Nov 18.

Abstract

In human interactions, gaze may be used to acquire information for goal-directed actions, to acquire information related to the interacting partner's actions, and in the context of multimodal communication. At present, there are no models of gaze behavior in the context of vision that adequately incorporate these three components. In this study, we aimed to uncover and quantify patterns of within-person gaze-action coupling, gaze-gesture and gaze-speech coupling, and coupling between one person's gaze and another person's manual actions, gestures, or speech (or exogenous attraction of gaze) during dyadic collaboration. We showed that in the context of a collaborative Lego Duplo-model copying task, within-person gaze-action coupling is strongest, followed by within-person gaze-gesture coupling, and coupling between gaze and another person's actions. When trying to infer gaze location from one's own manual actions, gestures, or speech or that of the other person, only one's own manual actions were found to lead to better inference compared to a baseline model. The improvement in inferring gaze location was limited, contrary to what might be expected based on previous research. We suggest that inferring gaze location may be most effective for constrained tasks in which different manual actions follow in a quick sequence, while gaze-gesture and gaze-speech coupling may be stronger in unconstrained conversational settings or when the collaboration requires more negotiation. Our findings may serve as an empirical foundation for future theory and model development, and may further be relevant in the context of action/intention prediction for (social) robotics and effective human-robot interaction.


Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/639a/11652574/faacadbd994b/13414_2024_2978_Fig1_HTML.jpg
