


People's judgments of humans and robots in a classic moral dilemma.

Affiliations

Brown University, Providence, RI 02912, USA.

Tufts University, Medford, MA 02155, USA.

Publication Information

Cognition. 2025 Jan;254:105958. doi: 10.1016/j.cognition.2024.105958. Epub 2024 Oct 2.

DOI: 10.1016/j.cognition.2024.105958
PMID: 39362054
Abstract

How do ordinary people evaluate robots that make morally significant decisions? Previous work has found both equal and different evaluations, and different ones in either direction. In 13 studies (N = 7670), we asked people to evaluate humans and robots that make decisions in norm conflicts (variants of the classic trolley dilemma). We examined several conditions that may influence whether moral evaluations of human and robot agents are the same or different: the type of moral judgment (norms vs. blame); the structure of the dilemma (side effect vs. means-end); salience of particular information (victim, outcome); culture (Japan vs. US); and encouraged empathy. Norms for humans and robots are broadly similar, but blame judgments show a robust asymmetry under one condition: Humans are blamed less than robots specifically for inaction decisions-here, refraining from sacrificing one person for the good of many. This asymmetry may emerge because people appreciate that the human faces an impossible decision and deserves mitigated blame for inaction; when evaluating a robot, such appreciation appears to be lacking. However, our evidence for this explanation is mixed. We discuss alternative explanations and offer methodological guidance for future work into people's moral judgment of robots and humans.


Similar Articles

1. People's judgments of humans and robots in a classic moral dilemma.
   Cognition. 2025 Jan;254:105958. doi: 10.1016/j.cognition.2024.105958. Epub 2024 Oct 2.
2. Machines and humans in sacrificial moral dilemmas: Required similarly but judged differently?
   Cognition. 2023 Oct;239:105575. doi: 10.1016/j.cognition.2023.105575. Epub 2023 Jul 28.
3. When does "no" mean no? Insights from sex robots.
   Cognition. 2024 Mar;244:105687. doi: 10.1016/j.cognition.2023.105687. Epub 2023 Dec 27.
4. Moral Judgments of Human vs. AI Agents in Moral Dilemmas.
   Behav Sci (Basel). 2023 Feb 16;13(2):181. doi: 10.3390/bs13020181.
5. Developmental changes in the perceived moral standing of robots.
   Cognition. 2025 Jan;254:105983. doi: 10.1016/j.cognition.2024.105983. Epub 2024 Nov 9.
6. Norm status, rather than norm type or blameworthiness, results in the side-effect effect.
   Psych J. 2019 Dec;8(4):513-519. doi: 10.1002/pchj.292. Epub 2019 May 30.
7. Inferences about moral character moderate the impact of consequences on blame and praise.
   Cognition. 2017 Oct;167:201-211. doi: 10.1016/j.cognition.2017.05.004. Epub 2017 May 17.
8. Effects of incidental emotions on moral dilemma judgments: An analysis using the CNI model.
   Emotion. 2018 Oct;18(7):989-1008. doi: 10.1037/emo0000399. Epub 2018 Feb 1.
9. Blame-Laden Moral Rebukes and the Morally Competent Robot: A Confucian Ethical Perspective.
   Sci Eng Ethics. 2020 Oct;26(5):2511-2526. doi: 10.1007/s11948-020-00246-w.
10. No luck for moral luck.
    Cognition. 2019 Jan;182:331-348. doi: 10.1016/j.cognition.2018.09.003. Epub 2018 Nov 11.

Cited By

1. Permissibility, Moral Emotions, and Perceived Moral Agency in Autonomous Driving Dilemmas: An Investigation of Pedestrian-Sacrifice and Driver-Sacrifice Scenarios in the Third-Person Perspective.
   Behav Sci (Basel). 2025 Jul 30;15(8):1038. doi: 10.3390/bs15081038.