

Framing Effects on Judgments of Social Robots' (Im)Moral Behaviors.

Author Information

Banks Jaime, Koban Kevin

Affiliations

College of Media and Communication, Texas Tech University, Lubbock, TX, United States.

Department of Communication, University of Vienna, Vienna, Austria.

Publication Information

Front Robot AI. 2021 May 10;8:627233. doi: 10.3389/frobt.2021.627233. eCollection 2021.

Abstract

Frames (discursive structures that make dimensions of a situation more or less salient) are understood to influence how people understand novel technologies. As technological agents are increasingly integrated into society, it becomes important to discover how native understandings (i.e., individual frames) of social robots are associated with how they are characterized by media, technology developers, and even the agents themselves (i.e., produced frames). Moreover, these individual and produced frames may influence the ways in which people see social robots as legitimate and trustworthy agents, especially in the face of (im)moral behavior. This three-study investigation begins to address this knowledge gap by 1) identifying individually held frames for explaining an android's (im)moral behavior, and experimentally testing how produced frames prime judgments about an android's morally ambiguous behavior in 2) mediated representations and 3) face-to-face exposures. Results indicate that people rely on discernible ground rules to explain social robot behaviors; these frames induced only limited effects on responsibility judgments of that robot's morally ambiguous behavior. Evidence also suggests that technophobia-induced reactance may move people to reject a produced frame in favor of a divergent individual frame.


Similar Articles

Human-like interactions prompt people to take a robot's perspective.
Front Psychol. 2023 Oct 10;14:1190620. doi: 10.3389/fpsyg.2023.1190620. eCollection 2023.

Holding Robots Responsible: The Elements of Machine Morality.
Trends Cogn Sci. 2019 May;23(5):365-368. doi: 10.1016/j.tics.2019.02.008. Epub 2019 Apr 5.

