
Moral Judgments of Human vs. AI Agents in Moral Dilemmas

Authors

Zhang Yuyan, Wu Jiahua, Yu Feng, Xu Liying

Affiliations

Department of Psychology, School of Philosophy, Wuhan University, Wuhan 430079, China.

School of Marxism, Tsinghua University, Beijing 100084, China.

Publication

Behav Sci (Basel). 2023 Feb 16;13(2):181. doi: 10.3390/bs13020181.

DOI: 10.3390/bs13020181
PMID: 36829410
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9951994/
Abstract

Artificial intelligence has quickly integrated into human society, and its moral decision-making has also begun to slowly seep into our lives. The significance of moral judgment research on artificial intelligence behavior is becoming increasingly prominent. The present research examines how people make moral judgments about the behavior of artificial intelligence agents in a trolley dilemma, where people are usually driven by controlled cognitive processes, and in a footbridge dilemma, where people are usually driven by automatic emotional responses. Through three experiments (N = 626), we found that in the trolley dilemma (Experiment 1), the agent type rather than the actual action influenced people's moral judgments. Specifically, participants rated AI agents' behavior as more immoral and deserving of more blame than humans' behavior. Conversely, in the footbridge dilemma (Experiment 2), the actual action rather than the agent type influenced people's moral judgments. Specifically, participants rated action (a utilitarian act) as less moral and permissible and more morally wrong and blameworthy than inaction (a deontological act). A mixed-design experiment (Experiment 3) produced a pattern of results consistent with Experiments 1 and 2. This suggests that in different types of moral dilemmas, people apply different modes of moral judgment to artificial intelligence, which may be explained by people engaging different processing systems when making moral judgments in different types of moral dilemmas.

Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6b70/9951994/61b2548a8e69/behavsci-13-00181-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6b70/9951994/f63e0b4909e2/behavsci-13-00181-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6b70/9951994/5a11ffee0014/behavsci-13-00181-g003.jpg

Similar articles

1. Moral Judgments of Human vs. AI Agents in Moral Dilemmas.
Behav Sci (Basel). 2023 Feb 16;13(2):181. doi: 10.3390/bs13020181.
2. Sacrificial utilitarian judgments do reflect concern for the greater good: Clarification via process dissociation and the judgments of philosophers.
Cognition. 2018 Oct;179:241-265. doi: 10.1016/j.cognition.2018.04.018. Epub 2018 Jul 2.
3. What makes moral dilemma judgments "utilitarian" or "deontological"?
Soc Neurosci. 2017 Dec;12(6):626-632. doi: 10.1080/17470919.2016.1248787. Epub 2016 Oct 28.
4. Individual and Environmental Correlates of Adolescents' Moral Decision-Making in Moral Dilemmas.
Front Psychol. 2021 Nov 24;12:770891. doi: 10.3389/fpsyg.2021.770891. eCollection 2021.
5. People's judgments of humans and robots in a classic moral dilemma.
Cognition. 2025 Jan;254:105958. doi: 10.1016/j.cognition.2024.105958. Epub 2024 Oct 2.
6. A spiking neuron model of moral judgment in trolley dilemmas.
Sci Rep. 2024 Sep 17;14(1):21733. doi: 10.1038/s41598-024-68024-3.
7. Consequences, norms, and generalized inaction in moral dilemmas: The CNI model of moral decision-making.
J Pers Soc Psychol. 2017 Sep;113(3):343-376. doi: 10.1037/pspa0000086.
8. Machines and humans in sacrificial moral dilemmas: Required similarly but judged differently?
Cognition. 2023 Oct;239:105575. doi: 10.1016/j.cognition.2023.105575. Epub 2023 Jul 28.
9. Judging the morality of utilitarian actions: How poor utilitarian accessibility makes judges irrational.
Psychon Bull Rev. 2016 Dec;23(6):1961-1967. doi: 10.3758/s13423-016-1029-2.
10. Fickle Judgments in Moral Dilemmas: Time Pressure and Utilitarian Judgments in an Interdependent Culture.
Front Psychol. 2022 Mar 3;13:795732. doi: 10.3389/fpsyg.2022.795732. eCollection 2022.

Cited by

1. Editorial: Moral psychology of AI.
Front Psychol. 2024 Mar 11;15:1382743. doi: 10.3389/fpsyg.2024.1382743. eCollection 2024.
2. Do Moral Judgments in Moral Dilemmas Make One More Inclined to Choose a Medical Degree?
Behav Sci (Basel). 2023 Jun 5;13(6):474. doi: 10.3390/bs13060474.

References

1. Resolving the Limitations of the CNI Model in Moral Decision Making Using the CAN Algorithm: A Methodological Contrast.
Behav Sci (Basel). 2022 Jul 14;12(7):233. doi: 10.3390/bs12070233.
2. Stand up to action: The postural effect of moral dilemma decision-making and the moderating role of dual processes.
Psych J. 2021 Aug;10(4):587-597. doi: 10.1002/pchj.449. Epub 2021 Apr 21.
3. CAN Algorithm: An Individual Level Approach to Identify Consequence and Norm Sensitivities and Overall Action/Inaction Preferences in Moral Decision-Making.
Front Psychol. 2021 Jan 13;11:547916. doi: 10.3389/fpsyg.2020.547916. eCollection 2020.
4. Moral Judgments.
Annu Rev Psychol. 2021 Jan 4;72:293-318. doi: 10.1146/annurev-psych-072220-104358. Epub 2020 Sep 4.
5. People are averse to machines making moral decisions.
Cognition. 2018 Dec;181:21-34. doi: 10.1016/j.cognition.2018.08.003. Epub 2018 Aug 11.
6. Consequences, norms, and generalized inaction in moral dilemmas: The CNI model of moral decision-making.
J Pers Soc Psychol. 2017 Sep;113(3):343-376. doi: 10.1037/pspa0000086.
7. What makes moral dilemma judgments "utilitarian" or "deontological"?
Soc Neurosci. 2017 Dec;12(6):626-632. doi: 10.1080/17470919.2016.1248787. Epub 2016 Oct 28.
8. The social dilemma of autonomous vehicles.
Science. 2016 Jun 24;352(6293):1573-6. doi: 10.1126/science.aaf2654.
9. Integrating socially assistive robotics into mental healthcare interventions: applications and recommendations for expanded use.
Clin Psychol Rev. 2015 Feb;35:35-46. doi: 10.1016/j.cpr.2014.07.001. Epub 2014 Jul 17.
10. Feeling robots and human zombies: mind perception and the uncanny valley.
Cognition. 2012 Oct;125(1):125-30. doi: 10.1016/j.cognition.2012.06.007. Epub 2012 Jul 9.