
The Development of Overtrust: An Empirical Simulation and Psychological Analysis in the Context of Human-Robot Interaction.

Authors

Ullrich Daniel, Butz Andreas, Diefenbach Sarah

Affiliations

Department of Computer Science, LMU Munich, Munich, Germany.

Department of Psychology, LMU Munich, Munich, Germany.

Publication

Front Robot AI. 2021 Apr 13;8:554578. doi: 10.3389/frobt.2021.554578. eCollection 2021.

Abstract

With impressive developments in human-robot interaction, it may seem that technology can do anything. Especially in the domain of social robots, which, because of their anthropomorphic shape, appear to be much more than programmed machines, people may overtrust the robot's actual capabilities and its reliability. This presents a serious problem, especially when personal well-being might be at stake. Hence, insights about the development and influencing factors of overtrust in robots may form an important basis for countermeasures and sensible design decisions. An empirical study [n = 110] explored the development of overtrust using the example of a pet feeding robot. A 2 × 2 experimental design with repeated measurements contrasted the effects of one's own experience, skill demonstration, and reputation through experience reports of others. The experiment was realized in a video environment where the participants had to imagine they were going on a four-week safari trip and leaving their beloved cat at home, making use of a pet feeding robot. Every day, the participants had to make a choice: go on a day safari without calling options (risk and reward) or make a boring car trip to another village to check whether the feeding had been successful and activate an emergency call if not (safe and no reward). Paralleling cases of overtrust in other domains (e.g., autopilot), the feeding robot performed flawlessly most of the time until, in the fourth week, it performed faultily on three consecutive days, resulting in the cat's death if the participants had decided to go on the day safari on those days. As expected, with repeated positive experience of the robot's reliability in feeding the cat, trust levels rapidly increased and the number of control calls decreased. Compared to one's own experience, skill demonstration and reputation were largely neglected or had only a temporary effect. We integrate these findings into a conceptual model of (over)trust over time and connect them to related psychological concepts such as positivism, instant rewards, inappropriate generalization, wishful thinking, dissonance theory, and social concepts from human-human interaction. Limitations of the present study as well as implications for robot design and future research are discussed.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e9df/8076673/5dcf9f4fe9cc/frobt-08-554578-g001.jpg
