Banks Jaime, Koban Kevin
College of Media and Communication, Texas Tech University, Lubbock, TX, United States.
Department of Communication, University of Vienna, Vienna, Austria.
Front Robot AI. 2021 May 10;8:627233. doi: 10.3389/frobt.2021.627233. eCollection 2021.
Frames, discursive structures that make dimensions of a situation more or less salient, are understood to influence how people understand novel technologies. As technological agents are increasingly integrated into society, it becomes important to discover how native understandings (i.e., individual frames) of social robots are associated with how they are characterized by media, technology developers, and even the agents themselves (i.e., produced frames). Moreover, these individual and produced frames may influence the ways in which people see social robots as legitimate and trustworthy agents, especially in the face of (im)moral behavior. This three-study investigation begins to address this knowledge gap by 1) identifying individually held frames for explaining an android's (im)moral behavior, and experimentally testing how produced frames prime judgments about an android's morally ambiguous behavior in 2) mediated representations and 3) face-to-face exposures. Results indicate that people rely on discernible ground rules to explain social robot behaviors; these frames induced only limited effects on responsibility judgments of that robot's morally ambiguous behavior. Evidence also suggests that technophobia-induced reactance may move people to reject a produced frame in favor of a divergent individual frame.