My Caregiver the Cobot: Comparing Visualization Techniques to Effectively Communicate Cobot Perception to People with Physical Impairments.

Affiliations

Human-Computer Interaction Group, Department of Media Informatics and Communication, Westphalian University of Applied Sciences, 45897 Gelsenkirchen, Germany.

Human-Computer Interaction Group, Paluno-The Ruhr Institute for Software Technology, Faculty of Business Administration and Economics, University of Duisburg-Essen, 45127 Essen, Germany.

Publication Information

Sensors (Basel). 2022 Jan 19;22(3):755. doi: 10.3390/s22030755.

Abstract

Nowadays, robots are found in a growing number of areas where they collaborate closely with humans. Enabled by lightweight materials and safety sensors, these cobots are gaining popularity in domestic care, where they support people with physical impairments in their everyday lives. However, when cobots perform actions autonomously, it remains challenging for human collaborators to understand and predict their behavior, which is crucial for achieving trust and user acceptance. One significant aspect of predicting cobot behavior is understanding their perception and comprehending how they "see" the world. To tackle this challenge, we compared three different visualization techniques for Spatial Augmented Reality, all of which communicate cobot perception by visually indicating which objects in the cobot's surroundings have been identified by its sensors. We compared two well-established visualizations against our proposed visualization in a remote user experiment with participants with physical impairments. In a second remote experiment, we validated these findings with a broader, non-specific user base. Our findings show that our proposed, lower-complexity visualization results in significantly faster reaction times than one of the established techniques and a lower task load than both. Overall, users preferred it as the more straightforward visualization. In Spatial Augmented Reality, with its known disadvantage of limited projection area size, established off-screen visualizations are not effective in communicating cobot perception, and our proposed visualization presents an easy-to-understand alternative.

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/542c/8838221/9d253c19f21d/sensors-22-00755-g001.jpg
