Frijns Helena Anna, Hirschmanner Matthias, Sienkiewicz Barbara, Hönig Peter, Indurkhya Bipin, Vincze Markus
Institute of Management Science, TU Wien, Vienna, Austria.
Automation and Control Institute, TU Wien, Vienna, Austria.
Front Robot AI. 2024 Apr 16;11:1356827. doi: 10.3389/frobt.2024.1356827. eCollection 2024.
In human-robot collaboration, failures are bound to occur. A thorough understanding of potential errors is necessary so that robotic system designers can develop systems that remedy failure cases. In this work, we study failures that occur when participants interact with a working system, focusing especially on errors in a robotic system's knowledge base of which the system is not aware. A human interaction partner can be part of the error detection process if they are given insight into the robot's knowledge and decision-making process. We investigate different communication modalities and the design of shared task representations in a joint human-robot object organization task. We conducted a user study (n = 31) in which the participants showed a Pepper robot how to organize objects, and the robot communicated the learned object configuration to the participants by means of speech, visualization, or a combination of speech and visualization. The multimodal, combined condition was preferred by 23 participants, followed by seven participants preferring the visualization. Based on the interviews, the errors that occurred, and the object configurations generated by the participants, we conclude that participants tend to test the system's limitations by making the task more complex, which provokes errors. This trial-and-error behavior serves a productive purpose and shows that failures arise from the combination of robot capabilities, the user's understanding and actions, and interaction in the environment. Moreover, it demonstrates that failure can play a productive role in establishing better user mental models of the technology.