

Towards efficient human-machine collaboration: effects of gaze-driven feedback and engagement on performance.

Authors

Mitev Nikolina, Renner Patrick, Pfeiffer Thies, Staudte Maria

Affiliations

CITEC, Universität des Saarlandes, Campus C7.4 (2.04), Saarbrücken, 66123, Germany.

CITEC, Bielefeld University, Inspiration 1, Bielefeld, 33619, Germany.

Publication

Cogn Res Princ Implic. 2018 Dec 29;3(1):51. doi: 10.1186/s41235-018-0148-x.

DOI: 10.1186/s41235-018-0148-x
PMID: 30594976
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC6311170/
Abstract

Referential success is crucial for collaborative task-solving in shared environments. In face-to-face interactions, humans, therefore, exploit speech, gesture, and gaze to identify a specific object. We investigate if and how the gaze behavior of a human interaction partner can be used by a gaze-aware assistance system to improve referential success. Specifically, our system describes objects in the real world to a human listener using on-the-fly speech generation. It continuously interprets listener gaze and implements alternative strategies to react to this implicit feedback. We used this system to investigate an optimal strategy for task performance: providing an unambiguous, longer instruction right from the beginning, or starting with a shorter, yet ambiguous instruction. Further, the system provides gaze-driven feedback, which could be either underspecified ("No, not that one!") or contrastive ("Further left!"). As expected, our results show that ambiguous instructions followed by underspecified feedback are not beneficial for task performance, whereas contrastive feedback results in faster interactions. Interestingly, this approach even outperforms unambiguous instructions (manipulation between subjects). However, when the system alternates between underspecified and contrastive feedback to initially ambiguous descriptions in an interleaved manner (within subjects), task performance is similar for both approaches. This suggests that listeners engage more intensely with the system when they can expect it to be cooperative. This, rather than the actual informativity of the spoken feedback, may determine the efficiency of information uptake and performance.


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1810/6311170/f943b38200c0/41235_2018_148_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1810/6311170/b4310c0f8b6f/41235_2018_148_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1810/6311170/200b2a688a40/41235_2018_148_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1810/6311170/3ec5c1880058/41235_2018_148_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1810/6311170/5608abee09f1/41235_2018_148_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1810/6311170/4d0e94f1328f/41235_2018_148_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1810/6311170/65f2c4f42b21/41235_2018_148_Fig7_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1810/6311170/4506fe953136/41235_2018_148_Fig8_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1810/6311170/ad8a0ca9a4fa/41235_2018_148_Fig9_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1810/6311170/52e9fd039bab/41235_2018_148_Fig10_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1810/6311170/4f618f7f0472/41235_2018_148_Fig11_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1810/6311170/c228ab0f1fd4/41235_2018_148_Fig12_HTML.jpg

Similar articles

1. Towards efficient human-machine collaboration: effects of gaze-driven feedback and engagement on performance.
Cogn Res Princ Implic. 2018 Dec 29;3(1):51. doi: 10.1186/s41235-018-0148-x.
2. Exploiting Listener Gaze to Improve Situated Communication in Dynamic Virtual Environments.
Cogn Sci. 2016 Sep;40(7):1671-1703. doi: 10.1111/cogs.12298. Epub 2015 Oct 16.
3. Do as eye say: gaze cueing and language in a real-world social interaction.
J Vis. 2013 Mar 11;13(4):6. doi: 10.1167/13.4.6.
4. Investigating joint attention mechanisms through spoken human-robot interaction.
Cognition. 2011 Aug;120(2):268-91. doi: 10.1016/j.cognition.2011.05.005. Epub 2011 Jun 12.
5. The influence of speaker gaze on listener comprehension: contrasting visual versus intentional accounts.
Cognition. 2014 Oct;133(1):317-28. doi: 10.1016/j.cognition.2014.06.003. Epub 2014 Aug 1.
6. I Reach Faster When I See You Look: Gaze Effects in Human-Human and Human-Robot Face-to-Face Cooperation.
Front Neurorobot. 2012 May 3;6:3. doi: 10.3389/fnbot.2012.00003. eCollection 2012.
7. Assessing the Role of Gaze Tracking in Optimizing Humans-In-The-Loop Telerobotic Operation Using Multimodal Feedback.
Front Robot AI. 2021 Oct 4;8:578596. doi: 10.3389/frobt.2021.578596. eCollection 2021.
8. Gaze in a real-world social interaction: A dual eye-tracking study.
Q J Exp Psychol (Hove). 2018 Oct;71(10):2162-2173. doi: 10.1177/1747021817739221. Epub 2018 Jan 1.
9. The effects of feedback on referential communication of preschool children.
J Speech Hear Res. 1982 Jun;25(2):224-9. doi: 10.1044/jshr.2502.224.
10. Infants understand the referential nature of human gaze but not robot gaze.
J Exp Child Psychol. 2013 Sep;116(1):86-95. doi: 10.1016/j.jecp.2013.02.007. Epub 2013 May 6.

Cited by

1. Measuring Collaboration Load With Pupillary Responses - Implications for the Design of Instructions in Task-Oriented HRI.
Front Psychol. 2021 Jul 20;12:623657. doi: 10.3389/fpsyg.2021.623657. eCollection 2021.

References

1. Performance in a Collaborative Search Task: The Role of Feedback and Alignment.
Top Cogn Sci. 2018 Jan;10(1):55-79. doi: 10.1111/tops.12300. Epub 2017 Nov 13.
2. Exploiting Listener Gaze to Improve Situated Communication in Dynamic Virtual Environments.
Cogn Sci. 2016 Sep;40(7):1671-1703. doi: 10.1111/cogs.12298. Epub 2015 Oct 16.
3. Eye movements as a window into real-time spoken language comprehension in natural contexts.
J Psycholinguist Res. 1995 Nov;24(6):409-36. doi: 10.1007/BF02143160.
4. Integration of visual and linguistic information in spoken language comprehension.
Science. 1995 Jun 16;268(5217):1632-4. doi: 10.1126/science.7777863.