Human-in-the-loop error detection in an object organization task with a social robot.

Author information

Frijns Helena Anna, Hirschmanner Matthias, Sienkiewicz Barbara, Hönig Peter, Indurkhya Bipin, Vincze Markus

Affiliations

Institute of Management Science, TU Wien, Vienna, Austria.

Automation and Control Institute, TU Wien, Vienna, Austria.

Publication information

Front Robot AI. 2024 Apr 16;11:1356827. doi: 10.3389/frobt.2024.1356827. eCollection 2024.

Abstract

In human-robot collaboration, failures are bound to occur. A thorough understanding of potential errors is necessary so that robotic system designers can develop systems that remedy failure cases. In this work, we study failures that occur when participants interact with a working system, focusing especially on errors in a robotic system's knowledge base of which the system itself is not aware. A human interaction partner can be part of the error detection process if they are given insight into the robot's knowledge and decision-making process. We investigate different communication modalities and the design of shared task representations in a joint human-robot object organization task. We conducted a user study (N = 31) in which participants showed a Pepper robot how to organize objects, and the robot communicated the learned object configuration to the participants by means of speech, visualization, or a combination of speech and visualization. The multimodal, combined condition was preferred by 23 participants, followed by seven participants who preferred the visualization alone. Based on the interviews, the errors that occurred, and the object configurations generated by the participants, we conclude that participants tend to test the system's limitations by making the task more complex, which provokes errors. This trial-and-error behavior serves a productive purpose and demonstrates that failures arise from the combination of robot capabilities, the user's understanding and actions, and interaction in the environment. Moreover, it demonstrates that failure can play a productive role in establishing better user mental models of the technology.

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/30d7/11058786/5fcb1789d14c/frobt-11-1356827-g001.jpg
